00:00:00.001 Started by upstream project "autotest-per-patch" build number 132729 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.123 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.124 The recommended git tool is: git 00:00:00.124 using credential 00000000-0000-0000-0000-000000000002 00:00:00.126 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.169 Fetching changes from the remote Git repository 00:00:00.174 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.214 Using shallow fetch with depth 1 00:00:00.214 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.214 > git --version # timeout=10 00:00:00.259 > git --version # 'git version 2.39.2' 00:00:00.259 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.283 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.283 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.334 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.346 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.357 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:07.357 > git config core.sparsecheckout # timeout=10 00:00:07.368 > git read-tree -mu HEAD # timeout=10 00:00:07.384 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:07.412 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.412 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.501 [Pipeline] Start of Pipeline 00:00:07.514 [Pipeline] library 00:00:07.516 Loading library shm_lib@master 00:00:07.516 Library shm_lib@master is cached. Copying from home. 00:00:07.532 [Pipeline] node 00:00:07.543 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:07.544 [Pipeline] { 00:00:07.552 [Pipeline] catchError 00:00:07.553 [Pipeline] { 00:00:07.565 [Pipeline] wrap 00:00:07.573 [Pipeline] { 00:00:07.579 [Pipeline] stage 00:00:07.581 [Pipeline] { (Prologue) 00:00:07.597 [Pipeline] echo 00:00:07.599 Node: VM-host-SM17 00:00:07.605 [Pipeline] cleanWs 00:00:07.614 [WS-CLEANUP] Deleting project workspace... 00:00:07.614 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.620 [WS-CLEANUP] done 00:00:07.801 [Pipeline] setCustomBuildProperty 00:00:07.871 [Pipeline] httpRequest 00:00:08.247 [Pipeline] echo 00:00:08.248 Sorcerer 10.211.164.101 is alive 00:00:08.255 [Pipeline] retry 00:00:08.256 [Pipeline] { 00:00:08.270 [Pipeline] httpRequest 00:00:08.274 HttpMethod: GET 00:00:08.274 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.275 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.295 Response Code: HTTP/1.1 200 OK 00:00:08.295 Success: Status code 200 is in the accepted range: 200,404 00:00:08.296 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.619 [Pipeline] } 00:00:12.636 [Pipeline] // retry 00:00:12.642 [Pipeline] sh 00:00:12.925 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.939 [Pipeline] httpRequest 00:00:13.844 [Pipeline] echo 00:00:13.846 Sorcerer 10.211.164.101 is alive 00:00:13.856 [Pipeline] retry 00:00:13.859 [Pipeline] { 00:00:13.875 [Pipeline] httpRequest 00:00:13.880 HttpMethod: GET 00:00:13.880 URL: http://10.211.164.101/packages/spdk_37ef4f42e32f7c1cedf89c5cac3d720a4a15e694.tar.gz 00:00:13.881 Sending request to url: http://10.211.164.101/packages/spdk_37ef4f42e32f7c1cedf89c5cac3d720a4a15e694.tar.gz 00:00:13.882 Response Code: HTTP/1.1 200 OK 00:00:13.882 Success: Status code 200 is in the accepted range: 200,404 00:00:13.883 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_37ef4f42e32f7c1cedf89c5cac3d720a4a15e694.tar.gz 00:00:33.752 [Pipeline] } 00:00:33.767 [Pipeline] // retry 00:00:33.773 [Pipeline] sh 00:00:34.052 + tar --no-same-owner -xf spdk_37ef4f42e32f7c1cedf89c5cac3d720a4a15e694.tar.gz 00:00:36.595 [Pipeline] sh 00:00:36.876 + git -C spdk log --oneline -n5 00:00:36.876 37ef4f42e bdev/nvme: use poll_group's fd_group to register interrupts 00:00:36.876 88d8055fc nvme: add poll_group interrupt callback 00:00:36.876 e9db16374 nvme: add spdk_nvme_poll_group_get_fd_group() 00:00:36.876 cf089b398 thread: fd_group-based interrupts 00:00:36.876 8a4656bc1 thread: move interrupt allocation to a function 00:00:36.897 [Pipeline] writeFile 00:00:36.914 [Pipeline] sh 00:00:37.196 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:37.210 [Pipeline] sh 00:00:37.492 + cat autorun-spdk.conf 00:00:37.492 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:37.492 SPDK_TEST_NVMF=1 00:00:37.492 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:37.492 SPDK_TEST_URING=1 00:00:37.492 SPDK_TEST_USDT=1 00:00:37.492 SPDK_RUN_UBSAN=1 00:00:37.492 NET_TYPE=virt 00:00:37.492 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:37.499 RUN_NIGHTLY=0 00:00:37.502 [Pipeline] } 00:00:37.516 [Pipeline] // stage 00:00:37.533 [Pipeline] stage 00:00:37.537 [Pipeline] { (Run VM) 00:00:37.552 [Pipeline] sh 00:00:37.835 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:37.835 + echo 'Start stage prepare_nvme.sh' 00:00:37.835 Start stage prepare_nvme.sh 00:00:37.835 + [[ -n 5 ]] 00:00:37.835 + disk_prefix=ex5 00:00:37.835 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:00:37.835 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:00:37.835 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:00:37.835 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:37.835 ++ SPDK_TEST_NVMF=1 00:00:37.835 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:37.835 ++ SPDK_TEST_URING=1 
00:00:37.835 ++ SPDK_TEST_USDT=1 00:00:37.835 ++ SPDK_RUN_UBSAN=1 00:00:37.835 ++ NET_TYPE=virt 00:00:37.835 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:37.835 ++ RUN_NIGHTLY=0 00:00:37.835 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:37.835 + nvme_files=() 00:00:37.835 + declare -A nvme_files 00:00:37.835 + backend_dir=/var/lib/libvirt/images/backends 00:00:37.835 + nvme_files['nvme.img']=5G 00:00:37.835 + nvme_files['nvme-cmb.img']=5G 00:00:37.835 + nvme_files['nvme-multi0.img']=4G 00:00:37.835 + nvme_files['nvme-multi1.img']=4G 00:00:37.835 + nvme_files['nvme-multi2.img']=4G 00:00:37.835 + nvme_files['nvme-openstack.img']=8G 00:00:37.835 + nvme_files['nvme-zns.img']=5G 00:00:37.835 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:37.835 + (( SPDK_TEST_FTL == 1 )) 00:00:37.835 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:37.835 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:37.835 + for nvme in "${!nvme_files[@]}" 00:00:37.835 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:00:37.835 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:37.835 + for nvme in "${!nvme_files[@]}" 00:00:37.835 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:00:37.835 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:37.835 + for nvme in "${!nvme_files[@]}" 00:00:37.835 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:00:37.835 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:37.835 + for nvme in "${!nvme_files[@]}" 00:00:37.835 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:00:37.835 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:37.835 + for nvme in "${!nvme_files[@]}" 00:00:37.835 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:00:37.835 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:37.835 + for nvme in "${!nvme_files[@]}" 00:00:37.835 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:00:37.835 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:37.835 + for nvme in "${!nvme_files[@]}" 00:00:37.835 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:00:37.835 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:37.835 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:00:37.835 + echo 'End stage prepare_nvme.sh' 00:00:37.835 End stage prepare_nvme.sh 00:00:37.848 [Pipeline] sh 00:00:38.131 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:38.131 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b 
/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora39 00:00:38.131 00:00:38.131 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:00:38.131 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:00:38.131 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:38.131 HELP=0 00:00:38.131 DRY_RUN=0 00:00:38.131 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 00:00:38.131 NVME_DISKS_TYPE=nvme,nvme, 00:00:38.131 NVME_AUTO_CREATE=0 00:00:38.131 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 00:00:38.131 NVME_CMB=,, 00:00:38.131 NVME_PMR=,, 00:00:38.131 NVME_ZNS=,, 00:00:38.131 NVME_MS=,, 00:00:38.131 NVME_FDP=,, 00:00:38.131 SPDK_VAGRANT_DISTRO=fedora39 00:00:38.131 SPDK_VAGRANT_VMCPU=10 00:00:38.131 SPDK_VAGRANT_VMRAM=12288 00:00:38.131 SPDK_VAGRANT_PROVIDER=libvirt 00:00:38.131 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:38.131 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:38.131 SPDK_OPENSTACK_NETWORK=0 00:00:38.131 VAGRANT_PACKAGE_BOX=0 00:00:38.131 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:38.131 FORCE_DISTRO=true 00:00:38.131 VAGRANT_BOX_VERSION= 00:00:38.131 EXTRA_VAGRANTFILES= 00:00:38.131 NIC_MODEL=e1000 00:00:38.131 00:00:38.131 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:00:38.131 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:40.667 Bringing machine 'default' up with 'libvirt' provider... 00:00:41.235 ==> default: Creating image (snapshot of base box volume). 00:00:41.235 ==> default: Creating domain with the following settings... 
00:00:41.235 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733492380_c6bb13d019e66568d9d6 00:00:41.235 ==> default: -- Domain type: kvm 00:00:41.235 ==> default: -- Cpus: 10 00:00:41.235 ==> default: -- Feature: acpi 00:00:41.235 ==> default: -- Feature: apic 00:00:41.235 ==> default: -- Feature: pae 00:00:41.235 ==> default: -- Memory: 12288M 00:00:41.235 ==> default: -- Memory Backing: hugepages: 00:00:41.235 ==> default: -- Management MAC: 00:00:41.235 ==> default: -- Loader: 00:00:41.235 ==> default: -- Nvram: 00:00:41.235 ==> default: -- Base box: spdk/fedora39 00:00:41.235 ==> default: -- Storage pool: default 00:00:41.235 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733492380_c6bb13d019e66568d9d6.img (20G) 00:00:41.235 ==> default: -- Volume Cache: default 00:00:41.235 ==> default: -- Kernel: 00:00:41.235 ==> default: -- Initrd: 00:00:41.235 ==> default: -- Graphics Type: vnc 00:00:41.235 ==> default: -- Graphics Port: -1 00:00:41.235 ==> default: -- Graphics IP: 127.0.0.1 00:00:41.235 ==> default: -- Graphics Password: Not defined 00:00:41.235 ==> default: -- Video Type: cirrus 00:00:41.235 ==> default: -- Video VRAM: 9216 00:00:41.235 ==> default: -- Sound Type: 00:00:41.235 ==> default: -- Keymap: en-us 00:00:41.235 ==> default: -- TPM Path: 00:00:41.235 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:41.235 ==> default: -- Command line args: 00:00:41.235 ==> default: -> value=-device, 00:00:41.235 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:41.235 ==> default: -> value=-drive, 00:00:41.235 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:00:41.235 ==> default: -> value=-device, 00:00:41.235 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:41.235 ==> default: -> value=-device, 00:00:41.235 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:41.235 ==> default: -> value=-drive, 00:00:41.235 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:41.235 ==> default: -> value=-device, 00:00:41.235 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:41.235 ==> default: -> value=-drive, 00:00:41.235 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:41.235 ==> default: -> value=-device, 00:00:41.235 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:41.235 ==> default: -> value=-drive, 00:00:41.235 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:41.235 ==> default: -> value=-device, 00:00:41.235 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:41.494 ==> default: Creating shared folders metadata... 00:00:41.494 ==> default: Starting domain. 00:00:42.874 ==> default: Waiting for domain to get an IP address... 00:01:00.983 ==> default: Waiting for SSH to become available... 00:01:00.983 ==> default: Configuring and enabling network interfaces... 
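(Reference note, not part of the console output: the "-drive"/"-device" pairs listed above are the libvirt-generated QEMU arguments that expose the raw backing files as emulated NVMe controllers and namespaces. Assembled by hand they would look roughly like the sketch below. The option strings are taken verbatim from the log; the qemu-system binary path, machine/memory settings, and single-controller layout are placeholders for illustration only.)

    # Sketch only: attach ex5-nvme.img as namespace 1 of an emulated NVMe controller.
    # serial/addr/block-size values copied from the log above; -m and accel are assumptions.
    qemu-system-x86_64 -machine accel=kvm -m 1024 -nographic \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0 \
      -device nvme,id=nvme-0,serial=12340,addr=0x10 \
      -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,logical_block_size=4096,physical_block_size=4096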
00:01:03.517 default: SSH address: 192.168.121.121:22 00:01:03.517 default: SSH username: vagrant 00:01:03.517 default: SSH auth method: private key 00:01:06.054 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:14.165 ==> default: Mounting SSHFS shared folder... 00:01:15.100 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:15.100 ==> default: Checking Mount.. 00:01:16.476 ==> default: Folder Successfully Mounted! 00:01:16.476 ==> default: Running provisioner: file... 00:01:17.411 default: ~/.gitconfig => .gitconfig 00:01:17.978 00:01:17.978 SUCCESS! 00:01:17.978 00:01:17.978 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:17.978 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:17.978 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:17.978 00:01:17.987 [Pipeline] } 00:01:18.003 [Pipeline] // stage 00:01:18.012 [Pipeline] dir 00:01:18.013 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:01:18.014 [Pipeline] { 00:01:18.027 [Pipeline] catchError 00:01:18.029 [Pipeline] { 00:01:18.042 [Pipeline] sh 00:01:18.322 + vagrant ssh-config --host vagrant 00:01:18.322 + sed -ne /^Host/,$p 00:01:18.322 + tee ssh_conf 00:01:21.610 Host vagrant 00:01:21.610 HostName 192.168.121.121 00:01:21.610 User vagrant 00:01:21.610 Port 22 00:01:21.610 UserKnownHostsFile /dev/null 00:01:21.610 StrictHostKeyChecking no 00:01:21.610 PasswordAuthentication no 00:01:21.610 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:21.610 IdentitiesOnly yes 00:01:21.610 LogLevel FATAL 00:01:21.610 ForwardAgent yes 00:01:21.610 ForwardX11 yes 00:01:21.610 00:01:21.626 [Pipeline] withEnv 00:01:21.629 [Pipeline] { 00:01:21.644 [Pipeline] sh 00:01:21.925 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:21.926 source /etc/os-release 00:01:21.926 [[ -e /image.version ]] && img=$(< /image.version) 00:01:21.926 # Minimal, systemd-like check. 00:01:21.926 if [[ -e /.dockerenv ]]; then 00:01:21.926 # Clear garbage from the node's name: 00:01:21.926 # agt-er_autotest_547-896 -> autotest_547-896 00:01:21.926 # $HOSTNAME is the actual container id 00:01:21.926 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:21.926 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:21.926 # We can assume this is a mount from a host where container is running, 00:01:21.926 # so fetch its hostname to easily identify the target swarm worker. 
00:01:21.926 container="$(< /etc/hostname) ($agent)" 00:01:21.926 else 00:01:21.926 # Fallback 00:01:21.926 container=$agent 00:01:21.926 fi 00:01:21.926 fi 00:01:21.926 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:21.926 00:01:22.198 [Pipeline] } 00:01:22.214 [Pipeline] // withEnv 00:01:22.224 [Pipeline] setCustomBuildProperty 00:01:22.239 [Pipeline] stage 00:01:22.242 [Pipeline] { (Tests) 00:01:22.258 [Pipeline] sh 00:01:22.539 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:22.814 [Pipeline] sh 00:01:23.094 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:23.368 [Pipeline] timeout 00:01:23.369 Timeout set to expire in 1 hr 0 min 00:01:23.371 [Pipeline] { 00:01:23.388 [Pipeline] sh 00:01:23.671 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:24.240 HEAD is now at 37ef4f42e bdev/nvme: use poll_group's fd_group to register interrupts 00:01:24.252 [Pipeline] sh 00:01:24.533 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:24.807 [Pipeline] sh 00:01:25.088 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:25.367 [Pipeline] sh 00:01:25.744 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:25.744 ++ readlink -f spdk_repo 00:01:25.744 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:25.744 + [[ -n /home/vagrant/spdk_repo ]] 00:01:25.744 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:25.744 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:25.744 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:25.744 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:25.744 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:25.744 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:25.744 + cd /home/vagrant/spdk_repo 00:01:25.744 + source /etc/os-release 00:01:25.745 ++ NAME='Fedora Linux' 00:01:25.745 ++ VERSION='39 (Cloud Edition)' 00:01:25.745 ++ ID=fedora 00:01:25.745 ++ VERSION_ID=39 00:01:25.745 ++ VERSION_CODENAME= 00:01:25.745 ++ PLATFORM_ID=platform:f39 00:01:25.745 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:25.745 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:25.745 ++ LOGO=fedora-logo-icon 00:01:25.745 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:25.745 ++ HOME_URL=https://fedoraproject.org/ 00:01:25.745 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:25.745 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:25.745 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:25.745 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:25.745 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:25.745 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:25.745 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:25.745 ++ SUPPORT_END=2024-11-12 00:01:25.745 ++ VARIANT='Cloud Edition' 00:01:25.745 ++ VARIANT_ID=cloud 00:01:25.745 + uname -a 00:01:25.745 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:25.745 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:26.314 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:26.314 Hugepages 00:01:26.314 node hugesize free / total 00:01:26.314 node0 1048576kB 0 / 0 00:01:26.314 node0 2048kB 0 / 0 00:01:26.314 00:01:26.314 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:26.314 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:26.314 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:26.314 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:26.314 + rm -f /tmp/spdk-ld-path 00:01:26.314 + source autorun-spdk.conf 00:01:26.314 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:26.314 ++ SPDK_TEST_NVMF=1 00:01:26.314 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:26.314 ++ SPDK_TEST_URING=1 00:01:26.314 ++ SPDK_TEST_USDT=1 00:01:26.314 ++ SPDK_RUN_UBSAN=1 00:01:26.314 ++ NET_TYPE=virt 00:01:26.314 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:26.314 ++ RUN_NIGHTLY=0 00:01:26.314 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:26.314 + [[ -n '' ]] 00:01:26.314 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:26.314 + for M in /var/spdk/build-*-manifest.txt 00:01:26.314 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:26.314 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:26.573 + for M in /var/spdk/build-*-manifest.txt 00:01:26.573 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:26.573 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:26.573 + for M in /var/spdk/build-*-manifest.txt 00:01:26.573 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:26.573 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:26.573 ++ uname 00:01:26.573 + [[ Linux == \L\i\n\u\x ]] 00:01:26.573 + sudo dmesg -T 00:01:26.573 + sudo dmesg --clear 00:01:26.573 + dmesg_pid=5215 00:01:26.573 + [[ Fedora Linux == FreeBSD ]] 00:01:26.573 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:26.573 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:26.573 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:26.573 + sudo dmesg -Tw 00:01:26.573 + [[ -x /usr/src/fio-static/fio ]] 00:01:26.573 + export FIO_BIN=/usr/src/fio-static/fio 00:01:26.573 + FIO_BIN=/usr/src/fio-static/fio 00:01:26.573 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:26.573 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:26.573 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:26.573 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:26.573 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:26.573 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:26.573 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:26.573 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:26.573 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:26.573 13:40:25 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:26.573 13:40:25 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:26.573 13:40:25 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:26.573 13:40:25 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:26.573 13:40:25 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:26.573 13:40:25 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:01:26.573 13:40:25 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:01:26.573 13:40:25 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:01:26.573 13:40:25 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:01:26.573 13:40:25 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:26.573 13:40:25 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:01:26.573 13:40:25 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:26.573 13:40:25 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:26.573 13:40:25 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:26.573 13:40:25 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:26.573 13:40:25 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:26.573 13:40:25 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:26.573 13:40:25 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:26.573 13:40:25 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:26.573 13:40:25 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:26.573 13:40:25 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:26.573 13:40:25 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:26.573 13:40:25 -- paths/export.sh@5 -- $ export PATH 00:01:26.573 13:40:25 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:26.573 13:40:25 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:26.573 13:40:25 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:26.573 13:40:25 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733492425.XXXXXX 00:01:26.573 13:40:25 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733492425.RDzJbV 00:01:26.573 13:40:25 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:26.573 13:40:25 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:26.573 13:40:25 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:26.573 13:40:25 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:26.573 13:40:25 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:26.573 13:40:25 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:26.573 13:40:25 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:26.573 13:40:25 -- common/autotest_common.sh@10 -- $ set +x 00:01:26.832 13:40:25 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:01:26.832 13:40:25 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:26.832 13:40:25 -- pm/common@17 -- $ local monitor 00:01:26.832 13:40:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:26.832 13:40:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:26.832 13:40:25 -- pm/common@25 -- $ sleep 1 00:01:26.832 13:40:25 -- pm/common@21 -- $ date +%s 00:01:26.832 13:40:25 -- pm/common@21 -- $ date +%s 00:01:26.832 13:40:25 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733492425 00:01:26.832 13:40:25 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733492425 00:01:26.832 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733492425_collect-vmstat.pm.log 00:01:26.832 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733492425_collect-cpu-load.pm.log 00:01:27.772 13:40:26 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:27.772 13:40:26 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:27.772 13:40:26 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:27.772 13:40:26 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:27.772 13:40:26 -- spdk/autobuild.sh@16 -- $ date -u 00:01:27.772 Fri Dec 6 01:40:26 PM UTC 2024 00:01:27.772 13:40:26 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:27.772 v25.01-pre-311-g37ef4f42e 00:01:27.772 13:40:27 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:27.772 13:40:27 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:27.772 13:40:27 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:27.772 13:40:27 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:27.772 13:40:27 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:27.772 13:40:27 -- common/autotest_common.sh@10 -- $ set +x 00:01:27.772 ************************************ 00:01:27.772 START TEST ubsan 00:01:27.772 ************************************ 00:01:27.772 using ubsan 00:01:27.772 13:40:27 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:27.772 00:01:27.773 real 0m0.000s 00:01:27.773 user 0m0.000s 00:01:27.773 sys 0m0.000s 00:01:27.773 13:40:27 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:27.773 ************************************ 00:01:27.773 END TEST ubsan 00:01:27.773 ************************************ 00:01:27.773 13:40:27 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:27.773 13:40:27 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:27.773 13:40:27 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:27.773 13:40:27 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:27.773 13:40:27 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:27.773 13:40:27 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:27.773 13:40:27 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:27.773 13:40:27 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:27.773 13:40:27 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:27.773 13:40:27 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:01:27.773 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:27.773 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:28.342 Using 'verbs' RDMA provider 00:01:44.174 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:56.381 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:56.381 Creating mk/config.mk...done. 00:01:56.381 Creating mk/cc.flags.mk...done. 00:01:56.381 Type 'make' to build. 
00:01:56.381 13:40:55 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:01:56.381 13:40:55 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:56.381 13:40:55 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:56.381 13:40:55 -- common/autotest_common.sh@10 -- $ set +x 00:01:56.381 ************************************ 00:01:56.381 START TEST make 00:01:56.381 ************************************ 00:01:56.381 13:40:55 make -- common/autotest_common.sh@1129 -- $ make -j10 00:01:56.381 make[1]: Nothing to be done for 'all'. 00:02:08.588 The Meson build system 00:02:08.588 Version: 1.5.0 00:02:08.588 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:08.588 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:08.588 Build type: native build 00:02:08.588 Program cat found: YES (/usr/bin/cat) 00:02:08.588 Project name: DPDK 00:02:08.588 Project version: 24.03.0 00:02:08.588 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:08.588 C linker for the host machine: cc ld.bfd 2.40-14 00:02:08.588 Host machine cpu family: x86_64 00:02:08.588 Host machine cpu: x86_64 00:02:08.588 Message: ## Building in Developer Mode ## 00:02:08.588 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:08.588 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:08.588 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:08.588 Program python3 found: YES (/usr/bin/python3) 00:02:08.588 Program cat found: YES (/usr/bin/cat) 00:02:08.588 Compiler for C supports arguments -march=native: YES 00:02:08.588 Checking for size of "void *" : 8 00:02:08.588 Checking for size of "void *" : 8 (cached) 00:02:08.588 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:08.588 Library m found: YES 00:02:08.588 Library numa found: YES 00:02:08.588 Has header "numaif.h" : YES 00:02:08.588 Library fdt found: NO 00:02:08.588 Library execinfo found: NO 00:02:08.588 Has header "execinfo.h" : YES 00:02:08.588 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:08.588 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:08.588 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:08.588 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:08.588 Run-time dependency openssl found: YES 3.1.1 00:02:08.588 Run-time dependency libpcap found: YES 1.10.4 00:02:08.588 Has header "pcap.h" with dependency libpcap: YES 00:02:08.588 Compiler for C supports arguments -Wcast-qual: YES 00:02:08.588 Compiler for C supports arguments -Wdeprecated: YES 00:02:08.588 Compiler for C supports arguments -Wformat: YES 00:02:08.588 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:08.588 Compiler for C supports arguments -Wformat-security: NO 00:02:08.588 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:08.588 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:08.588 Compiler for C supports arguments -Wnested-externs: YES 00:02:08.589 Compiler for C supports arguments -Wold-style-definition: YES 00:02:08.589 Compiler for C supports arguments -Wpointer-arith: YES 00:02:08.589 Compiler for C supports arguments -Wsign-compare: YES 00:02:08.589 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:08.589 Compiler for C supports arguments -Wundef: YES 00:02:08.589 Compiler for C supports arguments -Wwrite-strings: YES 00:02:08.589 Compiler for C supports 
arguments -Wno-address-of-packed-member: YES 00:02:08.589 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:08.589 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:08.589 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:08.589 Program objdump found: YES (/usr/bin/objdump) 00:02:08.589 Compiler for C supports arguments -mavx512f: YES 00:02:08.589 Checking if "AVX512 checking" compiles: YES 00:02:08.589 Fetching value of define "__SSE4_2__" : 1 00:02:08.589 Fetching value of define "__AES__" : 1 00:02:08.589 Fetching value of define "__AVX__" : 1 00:02:08.589 Fetching value of define "__AVX2__" : 1 00:02:08.589 Fetching value of define "__AVX512BW__" : (undefined) 00:02:08.589 Fetching value of define "__AVX512CD__" : (undefined) 00:02:08.589 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:08.589 Fetching value of define "__AVX512F__" : (undefined) 00:02:08.589 Fetching value of define "__AVX512VL__" : (undefined) 00:02:08.589 Fetching value of define "__PCLMUL__" : 1 00:02:08.589 Fetching value of define "__RDRND__" : 1 00:02:08.589 Fetching value of define "__RDSEED__" : 1 00:02:08.589 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:08.589 Fetching value of define "__znver1__" : (undefined) 00:02:08.589 Fetching value of define "__znver2__" : (undefined) 00:02:08.589 Fetching value of define "__znver3__" : (undefined) 00:02:08.589 Fetching value of define "__znver4__" : (undefined) 00:02:08.589 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:08.589 Message: lib/log: Defining dependency "log" 00:02:08.589 Message: lib/kvargs: Defining dependency "kvargs" 00:02:08.589 Message: lib/telemetry: Defining dependency "telemetry" 00:02:08.589 Checking for function "getentropy" : NO 00:02:08.589 Message: lib/eal: Defining dependency "eal" 00:02:08.589 Message: lib/ring: Defining dependency "ring" 00:02:08.589 Message: lib/rcu: Defining dependency "rcu" 00:02:08.589 Message: lib/mempool: Defining dependency "mempool" 00:02:08.589 Message: lib/mbuf: Defining dependency "mbuf" 00:02:08.589 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:08.589 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:08.589 Compiler for C supports arguments -mpclmul: YES 00:02:08.589 Compiler for C supports arguments -maes: YES 00:02:08.589 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:08.589 Compiler for C supports arguments -mavx512bw: YES 00:02:08.589 Compiler for C supports arguments -mavx512dq: YES 00:02:08.589 Compiler for C supports arguments -mavx512vl: YES 00:02:08.589 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:08.589 Compiler for C supports arguments -mavx2: YES 00:02:08.589 Compiler for C supports arguments -mavx: YES 00:02:08.589 Message: lib/net: Defining dependency "net" 00:02:08.589 Message: lib/meter: Defining dependency "meter" 00:02:08.589 Message: lib/ethdev: Defining dependency "ethdev" 00:02:08.589 Message: lib/pci: Defining dependency "pci" 00:02:08.589 Message: lib/cmdline: Defining dependency "cmdline" 00:02:08.589 Message: lib/hash: Defining dependency "hash" 00:02:08.589 Message: lib/timer: Defining dependency "timer" 00:02:08.589 Message: lib/compressdev: Defining dependency "compressdev" 00:02:08.589 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:08.589 Message: lib/dmadev: Defining dependency "dmadev" 00:02:08.589 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:08.589 Message: lib/power: Defining 
dependency "power" 00:02:08.589 Message: lib/reorder: Defining dependency "reorder" 00:02:08.589 Message: lib/security: Defining dependency "security" 00:02:08.589 Has header "linux/userfaultfd.h" : YES 00:02:08.589 Has header "linux/vduse.h" : YES 00:02:08.589 Message: lib/vhost: Defining dependency "vhost" 00:02:08.589 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:08.589 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:08.589 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:08.589 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:08.589 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:08.589 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:08.589 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:08.589 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:08.589 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:08.589 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:08.589 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:08.589 Configuring doxy-api-html.conf using configuration 00:02:08.589 Configuring doxy-api-man.conf using configuration 00:02:08.589 Program mandb found: YES (/usr/bin/mandb) 00:02:08.589 Program sphinx-build found: NO 00:02:08.589 Configuring rte_build_config.h using configuration 00:02:08.589 Message: 00:02:08.589 ================= 00:02:08.589 Applications Enabled 00:02:08.589 ================= 00:02:08.589 00:02:08.589 apps: 00:02:08.589 00:02:08.589 00:02:08.589 Message: 00:02:08.589 ================= 00:02:08.589 Libraries Enabled 00:02:08.589 ================= 00:02:08.589 00:02:08.589 libs: 00:02:08.589 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:08.589 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:08.589 cryptodev, dmadev, power, reorder, security, vhost, 00:02:08.589 00:02:08.589 Message: 00:02:08.589 =============== 00:02:08.589 Drivers Enabled 00:02:08.589 =============== 00:02:08.589 00:02:08.589 common: 00:02:08.589 00:02:08.589 bus: 00:02:08.589 pci, vdev, 00:02:08.589 mempool: 00:02:08.589 ring, 00:02:08.589 dma: 00:02:08.589 00:02:08.589 net: 00:02:08.589 00:02:08.589 crypto: 00:02:08.589 00:02:08.589 compress: 00:02:08.589 00:02:08.589 vdpa: 00:02:08.589 00:02:08.589 00:02:08.589 Message: 00:02:08.589 ================= 00:02:08.589 Content Skipped 00:02:08.589 ================= 00:02:08.589 00:02:08.589 apps: 00:02:08.589 dumpcap: explicitly disabled via build config 00:02:08.589 graph: explicitly disabled via build config 00:02:08.589 pdump: explicitly disabled via build config 00:02:08.589 proc-info: explicitly disabled via build config 00:02:08.589 test-acl: explicitly disabled via build config 00:02:08.589 test-bbdev: explicitly disabled via build config 00:02:08.589 test-cmdline: explicitly disabled via build config 00:02:08.589 test-compress-perf: explicitly disabled via build config 00:02:08.589 test-crypto-perf: explicitly disabled via build config 00:02:08.589 test-dma-perf: explicitly disabled via build config 00:02:08.589 test-eventdev: explicitly disabled via build config 00:02:08.590 test-fib: explicitly disabled via build config 00:02:08.590 test-flow-perf: explicitly disabled via build config 00:02:08.590 test-gpudev: explicitly disabled via build config 00:02:08.590 test-mldev: explicitly disabled via build config 00:02:08.590 test-pipeline: 
explicitly disabled via build config 00:02:08.590 test-pmd: explicitly disabled via build config 00:02:08.590 test-regex: explicitly disabled via build config 00:02:08.590 test-sad: explicitly disabled via build config 00:02:08.590 test-security-perf: explicitly disabled via build config 00:02:08.590 00:02:08.590 libs: 00:02:08.590 argparse: explicitly disabled via build config 00:02:08.590 metrics: explicitly disabled via build config 00:02:08.590 acl: explicitly disabled via build config 00:02:08.590 bbdev: explicitly disabled via build config 00:02:08.590 bitratestats: explicitly disabled via build config 00:02:08.590 bpf: explicitly disabled via build config 00:02:08.590 cfgfile: explicitly disabled via build config 00:02:08.590 distributor: explicitly disabled via build config 00:02:08.590 efd: explicitly disabled via build config 00:02:08.590 eventdev: explicitly disabled via build config 00:02:08.590 dispatcher: explicitly disabled via build config 00:02:08.590 gpudev: explicitly disabled via build config 00:02:08.590 gro: explicitly disabled via build config 00:02:08.590 gso: explicitly disabled via build config 00:02:08.590 ip_frag: explicitly disabled via build config 00:02:08.590 jobstats: explicitly disabled via build config 00:02:08.590 latencystats: explicitly disabled via build config 00:02:08.590 lpm: explicitly disabled via build config 00:02:08.590 member: explicitly disabled via build config 00:02:08.590 pcapng: explicitly disabled via build config 00:02:08.590 rawdev: explicitly disabled via build config 00:02:08.590 regexdev: explicitly disabled via build config 00:02:08.590 mldev: explicitly disabled via build config 00:02:08.590 rib: explicitly disabled via build config 00:02:08.590 sched: explicitly disabled via build config 00:02:08.590 stack: explicitly disabled via build config 00:02:08.590 ipsec: explicitly disabled via build config 00:02:08.590 pdcp: explicitly disabled via build config 00:02:08.590 fib: explicitly disabled via build config 00:02:08.590 port: explicitly disabled via build config 00:02:08.590 pdump: explicitly disabled via build config 00:02:08.590 table: explicitly disabled via build config 00:02:08.590 pipeline: explicitly disabled via build config 00:02:08.590 graph: explicitly disabled via build config 00:02:08.590 node: explicitly disabled via build config 00:02:08.590 00:02:08.590 drivers: 00:02:08.590 common/cpt: not in enabled drivers build config 00:02:08.590 common/dpaax: not in enabled drivers build config 00:02:08.590 common/iavf: not in enabled drivers build config 00:02:08.590 common/idpf: not in enabled drivers build config 00:02:08.590 common/ionic: not in enabled drivers build config 00:02:08.590 common/mvep: not in enabled drivers build config 00:02:08.590 common/octeontx: not in enabled drivers build config 00:02:08.590 bus/auxiliary: not in enabled drivers build config 00:02:08.590 bus/cdx: not in enabled drivers build config 00:02:08.590 bus/dpaa: not in enabled drivers build config 00:02:08.590 bus/fslmc: not in enabled drivers build config 00:02:08.590 bus/ifpga: not in enabled drivers build config 00:02:08.590 bus/platform: not in enabled drivers build config 00:02:08.590 bus/uacce: not in enabled drivers build config 00:02:08.590 bus/vmbus: not in enabled drivers build config 00:02:08.590 common/cnxk: not in enabled drivers build config 00:02:08.590 common/mlx5: not in enabled drivers build config 00:02:08.590 common/nfp: not in enabled drivers build config 00:02:08.590 common/nitrox: not in enabled drivers build config 
00:02:08.590 common/qat: not in enabled drivers build config 00:02:08.590 common/sfc_efx: not in enabled drivers build config 00:02:08.590 mempool/bucket: not in enabled drivers build config 00:02:08.590 mempool/cnxk: not in enabled drivers build config 00:02:08.590 mempool/dpaa: not in enabled drivers build config 00:02:08.590 mempool/dpaa2: not in enabled drivers build config 00:02:08.590 mempool/octeontx: not in enabled drivers build config 00:02:08.590 mempool/stack: not in enabled drivers build config 00:02:08.590 dma/cnxk: not in enabled drivers build config 00:02:08.590 dma/dpaa: not in enabled drivers build config 00:02:08.590 dma/dpaa2: not in enabled drivers build config 00:02:08.590 dma/hisilicon: not in enabled drivers build config 00:02:08.590 dma/idxd: not in enabled drivers build config 00:02:08.590 dma/ioat: not in enabled drivers build config 00:02:08.590 dma/skeleton: not in enabled drivers build config 00:02:08.590 net/af_packet: not in enabled drivers build config 00:02:08.590 net/af_xdp: not in enabled drivers build config 00:02:08.590 net/ark: not in enabled drivers build config 00:02:08.590 net/atlantic: not in enabled drivers build config 00:02:08.590 net/avp: not in enabled drivers build config 00:02:08.590 net/axgbe: not in enabled drivers build config 00:02:08.590 net/bnx2x: not in enabled drivers build config 00:02:08.590 net/bnxt: not in enabled drivers build config 00:02:08.590 net/bonding: not in enabled drivers build config 00:02:08.590 net/cnxk: not in enabled drivers build config 00:02:08.590 net/cpfl: not in enabled drivers build config 00:02:08.590 net/cxgbe: not in enabled drivers build config 00:02:08.590 net/dpaa: not in enabled drivers build config 00:02:08.590 net/dpaa2: not in enabled drivers build config 00:02:08.590 net/e1000: not in enabled drivers build config 00:02:08.590 net/ena: not in enabled drivers build config 00:02:08.590 net/enetc: not in enabled drivers build config 00:02:08.590 net/enetfec: not in enabled drivers build config 00:02:08.590 net/enic: not in enabled drivers build config 00:02:08.590 net/failsafe: not in enabled drivers build config 00:02:08.590 net/fm10k: not in enabled drivers build config 00:02:08.590 net/gve: not in enabled drivers build config 00:02:08.590 net/hinic: not in enabled drivers build config 00:02:08.590 net/hns3: not in enabled drivers build config 00:02:08.590 net/i40e: not in enabled drivers build config 00:02:08.590 net/iavf: not in enabled drivers build config 00:02:08.590 net/ice: not in enabled drivers build config 00:02:08.590 net/idpf: not in enabled drivers build config 00:02:08.590 net/igc: not in enabled drivers build config 00:02:08.590 net/ionic: not in enabled drivers build config 00:02:08.590 net/ipn3ke: not in enabled drivers build config 00:02:08.590 net/ixgbe: not in enabled drivers build config 00:02:08.590 net/mana: not in enabled drivers build config 00:02:08.590 net/memif: not in enabled drivers build config 00:02:08.590 net/mlx4: not in enabled drivers build config 00:02:08.590 net/mlx5: not in enabled drivers build config 00:02:08.590 net/mvneta: not in enabled drivers build config 00:02:08.590 net/mvpp2: not in enabled drivers build config 00:02:08.590 net/netvsc: not in enabled drivers build config 00:02:08.590 net/nfb: not in enabled drivers build config 00:02:08.590 net/nfp: not in enabled drivers build config 00:02:08.590 net/ngbe: not in enabled drivers build config 00:02:08.590 net/null: not in enabled drivers build config 00:02:08.590 net/octeontx: not in enabled drivers 
build config 00:02:08.590 net/octeon_ep: not in enabled drivers build config 00:02:08.590 net/pcap: not in enabled drivers build config 00:02:08.590 net/pfe: not in enabled drivers build config 00:02:08.590 net/qede: not in enabled drivers build config 00:02:08.590 net/ring: not in enabled drivers build config 00:02:08.590 net/sfc: not in enabled drivers build config 00:02:08.590 net/softnic: not in enabled drivers build config 00:02:08.590 net/tap: not in enabled drivers build config 00:02:08.590 net/thunderx: not in enabled drivers build config 00:02:08.590 net/txgbe: not in enabled drivers build config 00:02:08.590 net/vdev_netvsc: not in enabled drivers build config 00:02:08.591 net/vhost: not in enabled drivers build config 00:02:08.591 net/virtio: not in enabled drivers build config 00:02:08.591 net/vmxnet3: not in enabled drivers build config 00:02:08.591 raw/*: missing internal dependency, "rawdev" 00:02:08.591 crypto/armv8: not in enabled drivers build config 00:02:08.591 crypto/bcmfs: not in enabled drivers build config 00:02:08.591 crypto/caam_jr: not in enabled drivers build config 00:02:08.591 crypto/ccp: not in enabled drivers build config 00:02:08.591 crypto/cnxk: not in enabled drivers build config 00:02:08.591 crypto/dpaa_sec: not in enabled drivers build config 00:02:08.591 crypto/dpaa2_sec: not in enabled drivers build config 00:02:08.591 crypto/ipsec_mb: not in enabled drivers build config 00:02:08.591 crypto/mlx5: not in enabled drivers build config 00:02:08.591 crypto/mvsam: not in enabled drivers build config 00:02:08.591 crypto/nitrox: not in enabled drivers build config 00:02:08.591 crypto/null: not in enabled drivers build config 00:02:08.591 crypto/octeontx: not in enabled drivers build config 00:02:08.591 crypto/openssl: not in enabled drivers build config 00:02:08.591 crypto/scheduler: not in enabled drivers build config 00:02:08.591 crypto/uadk: not in enabled drivers build config 00:02:08.591 crypto/virtio: not in enabled drivers build config 00:02:08.591 compress/isal: not in enabled drivers build config 00:02:08.591 compress/mlx5: not in enabled drivers build config 00:02:08.591 compress/nitrox: not in enabled drivers build config 00:02:08.591 compress/octeontx: not in enabled drivers build config 00:02:08.591 compress/zlib: not in enabled drivers build config 00:02:08.591 regex/*: missing internal dependency, "regexdev" 00:02:08.591 ml/*: missing internal dependency, "mldev" 00:02:08.591 vdpa/ifc: not in enabled drivers build config 00:02:08.591 vdpa/mlx5: not in enabled drivers build config 00:02:08.591 vdpa/nfp: not in enabled drivers build config 00:02:08.591 vdpa/sfc: not in enabled drivers build config 00:02:08.591 event/*: missing internal dependency, "eventdev" 00:02:08.591 baseband/*: missing internal dependency, "bbdev" 00:02:08.591 gpu/*: missing internal dependency, "gpudev" 00:02:08.591 00:02:08.591 00:02:08.591 Build targets in project: 85 00:02:08.591 00:02:08.591 DPDK 24.03.0 00:02:08.591 00:02:08.591 User defined options 00:02:08.591 buildtype : debug 00:02:08.591 default_library : shared 00:02:08.591 libdir : lib 00:02:08.591 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:08.591 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:08.591 c_link_args : 00:02:08.591 cpu_instruction_set: native 00:02:08.591 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:08.591 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:08.591 enable_docs : false 00:02:08.591 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:08.591 enable_kmods : false 00:02:08.591 max_lcores : 128 00:02:08.591 tests : false 00:02:08.591 00:02:08.591 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:08.850 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:08.850 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:09.110 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:09.110 [3/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:09.110 [4/268] Linking static target lib/librte_kvargs.a 00:02:09.110 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:09.110 [6/268] Linking static target lib/librte_log.a 00:02:09.678 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:09.678 [8/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.678 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:09.678 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:09.938 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:09.938 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:09.938 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:09.938 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:09.938 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:09.938 [16/268] Linking static target lib/librte_telemetry.a 00:02:10.196 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:10.196 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.196 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:10.196 [20/268] Linking target lib/librte_log.so.24.1 00:02:10.455 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:10.455 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:10.455 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:10.715 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:10.715 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:10.715 [26/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:10.715 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:10.715 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:10.974 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:10.974 [30/268] Generating 
lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.974 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:10.974 [32/268] Linking target lib/librte_telemetry.so.24.1 00:02:10.974 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:10.974 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:11.233 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:11.233 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:11.493 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:11.753 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:11.753 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:11.753 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:11.753 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:11.753 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:11.753 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:11.753 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:12.012 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:12.012 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:12.012 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:12.012 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:12.272 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:12.272 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:12.534 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:12.534 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:12.793 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:12.793 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:12.793 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:12.793 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:12.793 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:12.793 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:13.052 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:13.052 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:13.310 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:13.310 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:13.568 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:13.568 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:13.568 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:13.826 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:13.826 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:13.826 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:13.826 [69/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:14.084 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:14.084 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:14.084 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:14.084 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:14.084 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:14.084 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:14.350 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:14.350 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:14.350 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:14.350 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:14.610 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:14.610 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:14.610 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:14.610 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:14.868 [84/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:14.868 [85/268] Linking static target lib/librte_rcu.a 00:02:14.868 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:15.125 [87/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:15.126 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:15.126 [89/268] Linking static target lib/librte_eal.a 00:02:15.126 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:15.126 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:15.126 [92/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:15.383 [93/268] Linking static target lib/librte_ring.a 00:02:15.383 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:15.383 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:15.383 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:15.383 [97/268] Linking static target lib/librte_mempool.a 00:02:15.383 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:15.383 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:15.383 [100/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.948 [101/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.948 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:15.948 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:15.948 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:15.948 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:15.948 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:15.948 [107/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:15.948 [108/268] Linking static target lib/librte_net.a 00:02:15.948 [109/268] Linking static target lib/librte_mbuf.a 00:02:16.206 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:16.206 [111/268] Linking static target lib/librte_meter.a 00:02:16.463 [112/268] Generating 
lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.463 [113/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.463 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:16.463 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:16.721 [116/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.721 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:16.721 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:17.289 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.289 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:17.289 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:17.548 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:17.806 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:17.807 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:17.807 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:17.807 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:17.807 [127/268] Linking static target lib/librte_pci.a 00:02:17.807 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:18.065 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:18.065 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:18.065 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:18.065 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:18.324 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:18.324 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:18.324 [135/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.324 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:18.324 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:18.324 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:18.324 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:18.324 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:18.324 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:18.583 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:18.583 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:18.583 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:18.583 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:18.583 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:18.583 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:18.583 [148/268] Linking static target lib/librte_cmdline.a 00:02:18.842 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:18.842 [150/268] Linking static target lib/librte_ethdev.a 00:02:19.100 [151/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:19.100 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:19.100 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:19.100 [154/268] Linking static target lib/librte_timer.a 00:02:19.100 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:19.358 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:19.358 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:19.617 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:19.617 [159/268] Linking static target lib/librte_hash.a 00:02:19.617 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:19.876 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:19.876 [162/268] Linking static target lib/librte_compressdev.a 00:02:19.876 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.876 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:19.876 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:20.135 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:20.135 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:20.394 [168/268] Linking static target lib/librte_dmadev.a 00:02:20.394 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:20.394 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.394 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:20.394 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:20.653 [173/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:20.653 [174/268] Linking static target lib/librte_cryptodev.a 00:02:20.653 [175/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:20.912 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.912 [177/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.912 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:21.171 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:21.171 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:21.171 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.171 [182/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:21.171 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:21.429 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:21.688 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:21.688 [186/268] Linking static target lib/librte_power.a 00:02:21.947 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:21.947 [188/268] Linking static target lib/librte_reorder.a 00:02:21.947 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:21.947 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:22.206 [191/268] Compiling C object 
lib/librte_security.a.p/security_rte_security.c.o 00:02:22.206 [192/268] Linking static target lib/librte_security.a 00:02:22.206 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:22.466 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:22.466 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.033 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.033 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.033 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:23.033 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:23.033 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:23.292 [201/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.292 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:23.551 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:23.552 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:23.552 [205/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:23.552 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:23.810 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:23.810 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:23.810 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:23.810 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:24.070 [211/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:24.070 [212/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:24.070 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:24.070 [214/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:24.070 [215/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:24.336 [216/268] Linking static target drivers/librte_bus_pci.a 00:02:24.336 [217/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:24.336 [218/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:24.336 [219/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:24.336 [220/268] Linking static target drivers/librte_bus_vdev.a 00:02:24.336 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:24.336 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:24.336 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:24.609 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:24.609 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:24.609 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:24.609 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.609 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:25.547 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:25.547 [230/268] Linking static target lib/librte_vhost.a 00:02:26.118 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.118 [232/268] Linking target lib/librte_eal.so.24.1 00:02:26.118 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:26.118 [234/268] Linking target lib/librte_meter.so.24.1 00:02:26.118 [235/268] Linking target lib/librte_dmadev.so.24.1 00:02:26.118 [236/268] Linking target lib/librte_pci.so.24.1 00:02:26.118 [237/268] Linking target lib/librte_ring.so.24.1 00:02:26.118 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:26.118 [239/268] Linking target lib/librte_timer.so.24.1 00:02:26.376 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:26.376 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:26.376 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:26.376 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:26.376 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:26.376 [245/268] Linking target lib/librte_rcu.so.24.1 00:02:26.376 [246/268] Linking target lib/librte_mempool.so.24.1 00:02:26.376 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:26.636 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:26.636 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:26.636 [250/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.636 [251/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:26.636 [252/268] Linking target lib/librte_mbuf.so.24.1 00:02:26.636 [253/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.636 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:26.895 [255/268] Linking target lib/librte_compressdev.so.24.1 00:02:26.895 [256/268] Linking target lib/librte_reorder.so.24.1 00:02:26.895 [257/268] Linking target lib/librte_net.so.24.1 00:02:26.895 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:26.895 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:26.895 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:26.895 [261/268] Linking target lib/librte_cmdline.so.24.1 00:02:26.895 [262/268] Linking target lib/librte_hash.so.24.1 00:02:26.895 [263/268] Linking target lib/librte_security.so.24.1 00:02:26.895 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:27.153 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:27.153 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:27.153 [267/268] Linking target lib/librte_power.so.24.1 00:02:27.153 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:27.153 INFO: autodetecting backend as ninja 00:02:27.153 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:53.719 CC lib/ut/ut.o 00:02:53.719 CC lib/log/log_deprecated.o 00:02:53.719 CC lib/log/log.o 00:02:53.719 CC 
lib/log/log_flags.o 00:02:53.720 CC lib/ut_mock/mock.o 00:02:53.720 LIB libspdk_ut.a 00:02:53.720 LIB libspdk_log.a 00:02:53.720 LIB libspdk_ut_mock.a 00:02:53.720 SO libspdk_ut.so.2.0 00:02:53.720 SO libspdk_ut_mock.so.6.0 00:02:53.720 SO libspdk_log.so.7.1 00:02:53.720 SYMLINK libspdk_ut_mock.so 00:02:53.720 SYMLINK libspdk_ut.so 00:02:53.720 SYMLINK libspdk_log.so 00:02:53.720 CC lib/dma/dma.o 00:02:53.720 CXX lib/trace_parser/trace.o 00:02:53.720 CC lib/util/bit_array.o 00:02:53.720 CC lib/util/base64.o 00:02:53.720 CC lib/util/cpuset.o 00:02:53.720 CC lib/util/crc16.o 00:02:53.720 CC lib/util/crc32.o 00:02:53.720 CC lib/util/crc32c.o 00:02:53.720 CC lib/ioat/ioat.o 00:02:53.720 CC lib/vfio_user/host/vfio_user_pci.o 00:02:53.720 CC lib/util/crc32_ieee.o 00:02:53.720 CC lib/vfio_user/host/vfio_user.o 00:02:53.720 CC lib/util/crc64.o 00:02:53.720 CC lib/util/dif.o 00:02:53.720 CC lib/util/fd.o 00:02:53.720 LIB libspdk_dma.a 00:02:53.720 CC lib/util/fd_group.o 00:02:53.720 SO libspdk_dma.so.5.0 00:02:53.720 LIB libspdk_ioat.a 00:02:53.720 SO libspdk_ioat.so.7.0 00:02:53.720 SYMLINK libspdk_dma.so 00:02:53.720 CC lib/util/file.o 00:02:53.720 CC lib/util/hexlify.o 00:02:53.720 CC lib/util/iov.o 00:02:53.720 CC lib/util/math.o 00:02:53.720 SYMLINK libspdk_ioat.so 00:02:53.720 CC lib/util/net.o 00:02:53.720 CC lib/util/pipe.o 00:02:53.720 LIB libspdk_vfio_user.a 00:02:53.720 SO libspdk_vfio_user.so.5.0 00:02:53.720 CC lib/util/strerror_tls.o 00:02:53.720 CC lib/util/string.o 00:02:53.720 SYMLINK libspdk_vfio_user.so 00:02:53.720 CC lib/util/uuid.o 00:02:53.720 CC lib/util/xor.o 00:02:53.720 CC lib/util/zipf.o 00:02:53.720 CC lib/util/md5.o 00:02:53.720 LIB libspdk_util.a 00:02:53.720 SO libspdk_util.so.10.1 00:02:53.720 LIB libspdk_trace_parser.a 00:02:53.720 SO libspdk_trace_parser.so.6.0 00:02:53.720 SYMLINK libspdk_util.so 00:02:53.720 SYMLINK libspdk_trace_parser.so 00:02:53.720 CC lib/idxd/idxd.o 00:02:53.720 CC lib/json/json_parse.o 00:02:53.720 CC lib/idxd/idxd_kernel.o 00:02:53.720 CC lib/json/json_util.o 00:02:53.720 CC lib/env_dpdk/env.o 00:02:53.720 CC lib/idxd/idxd_user.o 00:02:53.720 CC lib/json/json_write.o 00:02:53.720 CC lib/conf/conf.o 00:02:53.720 CC lib/vmd/vmd.o 00:02:53.720 CC lib/rdma_utils/rdma_utils.o 00:02:53.720 CC lib/vmd/led.o 00:02:53.720 CC lib/env_dpdk/memory.o 00:02:53.720 CC lib/env_dpdk/pci.o 00:02:53.720 LIB libspdk_conf.a 00:02:53.720 SO libspdk_conf.so.6.0 00:02:53.720 LIB libspdk_json.a 00:02:53.720 LIB libspdk_rdma_utils.a 00:02:53.720 CC lib/env_dpdk/init.o 00:02:53.720 SO libspdk_json.so.6.0 00:02:53.720 SYMLINK libspdk_conf.so 00:02:53.720 SO libspdk_rdma_utils.so.1.0 00:02:53.720 CC lib/env_dpdk/threads.o 00:02:53.720 CC lib/env_dpdk/pci_ioat.o 00:02:53.720 SYMLINK libspdk_json.so 00:02:53.720 SYMLINK libspdk_rdma_utils.so 00:02:53.720 CC lib/env_dpdk/pci_virtio.o 00:02:53.720 CC lib/env_dpdk/pci_vmd.o 00:02:53.720 CC lib/env_dpdk/pci_idxd.o 00:02:53.720 CC lib/env_dpdk/pci_event.o 00:02:53.720 CC lib/jsonrpc/jsonrpc_server.o 00:02:53.720 LIB libspdk_idxd.a 00:02:53.720 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:53.720 SO libspdk_idxd.so.12.1 00:02:53.720 LIB libspdk_vmd.a 00:02:53.720 CC lib/env_dpdk/sigbus_handler.o 00:02:53.720 SYMLINK libspdk_idxd.so 00:02:53.720 SO libspdk_vmd.so.6.0 00:02:53.720 CC lib/env_dpdk/pci_dpdk.o 00:02:53.720 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:53.720 CC lib/jsonrpc/jsonrpc_client.o 00:02:53.720 SYMLINK libspdk_vmd.so 00:02:53.720 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:53.980 CC lib/jsonrpc/jsonrpc_client_tcp.o 
00:02:53.980 CC lib/rdma_provider/common.o 00:02:53.980 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:53.980 LIB libspdk_jsonrpc.a 00:02:54.240 SO libspdk_jsonrpc.so.6.0 00:02:54.240 LIB libspdk_rdma_provider.a 00:02:54.240 SYMLINK libspdk_jsonrpc.so 00:02:54.240 SO libspdk_rdma_provider.so.7.0 00:02:54.240 SYMLINK libspdk_rdma_provider.so 00:02:54.498 CC lib/rpc/rpc.o 00:02:54.498 LIB libspdk_env_dpdk.a 00:02:54.498 SO libspdk_env_dpdk.so.15.1 00:02:54.756 LIB libspdk_rpc.a 00:02:54.756 SO libspdk_rpc.so.6.0 00:02:54.756 SYMLINK libspdk_env_dpdk.so 00:02:54.756 SYMLINK libspdk_rpc.so 00:02:55.015 CC lib/trace/trace.o 00:02:55.015 CC lib/trace/trace_flags.o 00:02:55.015 CC lib/trace/trace_rpc.o 00:02:55.015 CC lib/notify/notify_rpc.o 00:02:55.015 CC lib/notify/notify.o 00:02:55.015 CC lib/keyring/keyring.o 00:02:55.015 CC lib/keyring/keyring_rpc.o 00:02:55.274 LIB libspdk_notify.a 00:02:55.274 SO libspdk_notify.so.6.0 00:02:55.274 LIB libspdk_keyring.a 00:02:55.274 SO libspdk_keyring.so.2.0 00:02:55.274 LIB libspdk_trace.a 00:02:55.274 SYMLINK libspdk_notify.so 00:02:55.274 SO libspdk_trace.so.11.0 00:02:55.533 SYMLINK libspdk_keyring.so 00:02:55.533 SYMLINK libspdk_trace.so 00:02:55.793 CC lib/thread/thread.o 00:02:55.793 CC lib/thread/iobuf.o 00:02:55.793 CC lib/sock/sock.o 00:02:55.793 CC lib/sock/sock_rpc.o 00:02:56.359 LIB libspdk_sock.a 00:02:56.359 SO libspdk_sock.so.10.0 00:02:56.359 SYMLINK libspdk_sock.so 00:02:56.619 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:56.619 CC lib/nvme/nvme_ctrlr.o 00:02:56.619 CC lib/nvme/nvme_fabric.o 00:02:56.619 CC lib/nvme/nvme_ns_cmd.o 00:02:56.619 CC lib/nvme/nvme_pcie_common.o 00:02:56.619 CC lib/nvme/nvme_ns.o 00:02:56.619 CC lib/nvme/nvme_qpair.o 00:02:56.619 CC lib/nvme/nvme_pcie.o 00:02:56.619 CC lib/nvme/nvme.o 00:02:57.553 LIB libspdk_thread.a 00:02:57.553 SO libspdk_thread.so.11.0 00:02:57.553 CC lib/nvme/nvme_quirks.o 00:02:57.553 CC lib/nvme/nvme_transport.o 00:02:57.553 CC lib/nvme/nvme_discovery.o 00:02:57.553 SYMLINK libspdk_thread.so 00:02:57.553 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:57.553 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:57.553 CC lib/nvme/nvme_tcp.o 00:02:57.553 CC lib/nvme/nvme_opal.o 00:02:57.810 CC lib/nvme/nvme_io_msg.o 00:02:57.810 CC lib/nvme/nvme_poll_group.o 00:02:58.068 CC lib/nvme/nvme_zns.o 00:02:58.068 CC lib/nvme/nvme_stubs.o 00:02:58.326 CC lib/nvme/nvme_auth.o 00:02:58.326 CC lib/accel/accel.o 00:02:58.326 CC lib/accel/accel_rpc.o 00:02:58.326 CC lib/blob/blobstore.o 00:02:58.326 CC lib/blob/request.o 00:02:58.585 CC lib/blob/zeroes.o 00:02:58.585 CC lib/accel/accel_sw.o 00:02:58.585 CC lib/blob/blob_bs_dev.o 00:02:58.843 CC lib/nvme/nvme_cuse.o 00:02:58.843 CC lib/init/json_config.o 00:02:58.843 CC lib/nvme/nvme_rdma.o 00:02:59.100 CC lib/virtio/virtio.o 00:02:59.100 CC lib/virtio/virtio_vhost_user.o 00:02:59.100 CC lib/virtio/virtio_vfio_user.o 00:02:59.100 CC lib/init/subsystem.o 00:02:59.100 CC lib/fsdev/fsdev.o 00:02:59.100 CC lib/init/subsystem_rpc.o 00:02:59.360 CC lib/init/rpc.o 00:02:59.360 CC lib/fsdev/fsdev_io.o 00:02:59.360 CC lib/fsdev/fsdev_rpc.o 00:02:59.360 CC lib/virtio/virtio_pci.o 00:02:59.360 LIB libspdk_accel.a 00:02:59.360 SO libspdk_accel.so.16.0 00:02:59.360 LIB libspdk_init.a 00:02:59.666 SYMLINK libspdk_accel.so 00:02:59.666 SO libspdk_init.so.6.0 00:02:59.666 SYMLINK libspdk_init.so 00:02:59.666 LIB libspdk_virtio.a 00:02:59.666 CC lib/bdev/bdev.o 00:02:59.666 CC lib/bdev/bdev_rpc.o 00:02:59.666 CC lib/bdev/bdev_zone.o 00:02:59.666 CC lib/bdev/part.o 00:02:59.666 CC 
lib/bdev/scsi_nvme.o 00:02:59.666 SO libspdk_virtio.so.7.0 00:02:59.666 CC lib/event/app.o 00:02:59.924 SYMLINK libspdk_virtio.so 00:02:59.924 CC lib/event/reactor.o 00:02:59.924 LIB libspdk_fsdev.a 00:02:59.924 SO libspdk_fsdev.so.2.0 00:02:59.924 CC lib/event/log_rpc.o 00:02:59.924 CC lib/event/app_rpc.o 00:02:59.924 SYMLINK libspdk_fsdev.so 00:02:59.924 CC lib/event/scheduler_static.o 00:03:00.182 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:00.182 LIB libspdk_event.a 00:03:00.182 SO libspdk_event.so.14.0 00:03:00.441 LIB libspdk_nvme.a 00:03:00.441 SYMLINK libspdk_event.so 00:03:00.441 SO libspdk_nvme.so.15.0 00:03:00.700 LIB libspdk_fuse_dispatcher.a 00:03:00.700 SYMLINK libspdk_nvme.so 00:03:00.700 SO libspdk_fuse_dispatcher.so.1.0 00:03:00.959 SYMLINK libspdk_fuse_dispatcher.so 00:03:01.218 LIB libspdk_blob.a 00:03:01.477 SO libspdk_blob.so.12.0 00:03:01.477 SYMLINK libspdk_blob.so 00:03:01.737 CC lib/blobfs/blobfs.o 00:03:01.737 CC lib/blobfs/tree.o 00:03:01.737 CC lib/lvol/lvol.o 00:03:02.305 LIB libspdk_bdev.a 00:03:02.305 SO libspdk_bdev.so.17.0 00:03:02.565 SYMLINK libspdk_bdev.so 00:03:02.565 LIB libspdk_blobfs.a 00:03:02.565 SO libspdk_blobfs.so.11.0 00:03:02.824 CC lib/nvmf/ctrlr.o 00:03:02.824 CC lib/nvmf/ctrlr_discovery.o 00:03:02.824 CC lib/nvmf/ctrlr_bdev.o 00:03:02.824 CC lib/nbd/nbd.o 00:03:02.824 CC lib/nvmf/subsystem.o 00:03:02.824 CC lib/ublk/ublk.o 00:03:02.824 CC lib/scsi/dev.o 00:03:02.824 LIB libspdk_lvol.a 00:03:02.824 CC lib/ftl/ftl_core.o 00:03:02.824 SYMLINK libspdk_blobfs.so 00:03:02.824 CC lib/ublk/ublk_rpc.o 00:03:02.824 SO libspdk_lvol.so.11.0 00:03:02.825 SYMLINK libspdk_lvol.so 00:03:02.825 CC lib/scsi/lun.o 00:03:02.825 CC lib/scsi/port.o 00:03:03.084 CC lib/scsi/scsi.o 00:03:03.084 CC lib/ftl/ftl_init.o 00:03:03.084 CC lib/ftl/ftl_layout.o 00:03:03.084 CC lib/nbd/nbd_rpc.o 00:03:03.084 CC lib/ftl/ftl_debug.o 00:03:03.084 CC lib/scsi/scsi_bdev.o 00:03:03.344 LIB libspdk_nbd.a 00:03:03.344 CC lib/nvmf/nvmf.o 00:03:03.344 CC lib/ftl/ftl_io.o 00:03:03.344 SO libspdk_nbd.so.7.0 00:03:03.344 CC lib/nvmf/nvmf_rpc.o 00:03:03.344 SYMLINK libspdk_nbd.so 00:03:03.344 CC lib/nvmf/transport.o 00:03:03.344 LIB libspdk_ublk.a 00:03:03.344 CC lib/ftl/ftl_sb.o 00:03:03.344 SO libspdk_ublk.so.3.0 00:03:03.344 CC lib/nvmf/tcp.o 00:03:03.604 SYMLINK libspdk_ublk.so 00:03:03.604 CC lib/nvmf/stubs.o 00:03:03.604 CC lib/nvmf/mdns_server.o 00:03:03.604 CC lib/ftl/ftl_l2p.o 00:03:03.604 CC lib/scsi/scsi_pr.o 00:03:03.863 CC lib/ftl/ftl_l2p_flat.o 00:03:03.863 CC lib/nvmf/rdma.o 00:03:03.863 CC lib/scsi/scsi_rpc.o 00:03:03.863 CC lib/scsi/task.o 00:03:04.123 CC lib/ftl/ftl_nv_cache.o 00:03:04.123 CC lib/ftl/ftl_band.o 00:03:04.123 CC lib/nvmf/auth.o 00:03:04.123 CC lib/ftl/ftl_band_ops.o 00:03:04.123 CC lib/ftl/ftl_writer.o 00:03:04.123 LIB libspdk_scsi.a 00:03:04.123 CC lib/ftl/ftl_rq.o 00:03:04.382 SO libspdk_scsi.so.9.0 00:03:04.382 SYMLINK libspdk_scsi.so 00:03:04.382 CC lib/ftl/ftl_reloc.o 00:03:04.382 CC lib/ftl/ftl_l2p_cache.o 00:03:04.382 CC lib/ftl/ftl_p2l.o 00:03:04.382 CC lib/ftl/ftl_p2l_log.o 00:03:04.641 CC lib/iscsi/conn.o 00:03:04.641 CC lib/vhost/vhost.o 00:03:04.641 CC lib/ftl/mngt/ftl_mngt.o 00:03:04.900 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:04.900 CC lib/iscsi/init_grp.o 00:03:04.900 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:04.900 CC lib/iscsi/iscsi.o 00:03:04.900 CC lib/vhost/vhost_rpc.o 00:03:04.900 CC lib/vhost/vhost_scsi.o 00:03:05.160 CC lib/iscsi/param.o 00:03:05.160 CC lib/iscsi/portal_grp.o 00:03:05.160 CC lib/iscsi/tgt_node.o 00:03:05.160 CC 
lib/ftl/mngt/ftl_mngt_startup.o 00:03:05.160 CC lib/iscsi/iscsi_subsystem.o 00:03:05.419 CC lib/iscsi/iscsi_rpc.o 00:03:05.419 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:05.419 CC lib/iscsi/task.o 00:03:05.677 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:05.677 CC lib/vhost/vhost_blk.o 00:03:05.677 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:05.677 CC lib/vhost/rte_vhost_user.o 00:03:05.677 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:05.677 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:05.936 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:05.936 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:05.936 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:05.936 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:05.936 LIB libspdk_nvmf.a 00:03:05.936 CC lib/ftl/utils/ftl_conf.o 00:03:05.936 CC lib/ftl/utils/ftl_md.o 00:03:06.195 SO libspdk_nvmf.so.20.0 00:03:06.195 CC lib/ftl/utils/ftl_mempool.o 00:03:06.195 CC lib/ftl/utils/ftl_bitmap.o 00:03:06.195 CC lib/ftl/utils/ftl_property.o 00:03:06.195 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:06.195 SYMLINK libspdk_nvmf.so 00:03:06.195 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:06.195 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:06.195 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:06.453 LIB libspdk_iscsi.a 00:03:06.453 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:06.453 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:06.453 SO libspdk_iscsi.so.8.0 00:03:06.453 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:06.453 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:06.453 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:06.453 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:06.453 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:06.713 SYMLINK libspdk_iscsi.so 00:03:06.713 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:06.713 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:06.713 CC lib/ftl/base/ftl_base_dev.o 00:03:06.713 CC lib/ftl/base/ftl_base_bdev.o 00:03:06.713 CC lib/ftl/ftl_trace.o 00:03:06.713 LIB libspdk_vhost.a 00:03:06.972 SO libspdk_vhost.so.8.0 00:03:06.972 SYMLINK libspdk_vhost.so 00:03:06.972 LIB libspdk_ftl.a 00:03:07.231 SO libspdk_ftl.so.9.0 00:03:07.491 SYMLINK libspdk_ftl.so 00:03:07.750 CC module/env_dpdk/env_dpdk_rpc.o 00:03:08.009 CC module/sock/posix/posix.o 00:03:08.009 CC module/accel/error/accel_error.o 00:03:08.009 CC module/scheduler/gscheduler/gscheduler.o 00:03:08.009 CC module/sock/uring/uring.o 00:03:08.009 CC module/blob/bdev/blob_bdev.o 00:03:08.009 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:08.009 CC module/fsdev/aio/fsdev_aio.o 00:03:08.009 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:08.009 CC module/keyring/file/keyring.o 00:03:08.009 LIB libspdk_env_dpdk_rpc.a 00:03:08.009 SO libspdk_env_dpdk_rpc.so.6.0 00:03:08.009 SYMLINK libspdk_env_dpdk_rpc.so 00:03:08.009 LIB libspdk_scheduler_gscheduler.a 00:03:08.009 CC module/keyring/file/keyring_rpc.o 00:03:08.009 LIB libspdk_scheduler_dpdk_governor.a 00:03:08.009 SO libspdk_scheduler_gscheduler.so.4.0 00:03:08.009 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:08.009 LIB libspdk_scheduler_dynamic.a 00:03:08.009 CC module/accel/error/accel_error_rpc.o 00:03:08.269 SO libspdk_scheduler_dynamic.so.4.0 00:03:08.269 SYMLINK libspdk_scheduler_gscheduler.so 00:03:08.269 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:08.269 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:08.269 LIB libspdk_blob_bdev.a 00:03:08.269 SYMLINK libspdk_scheduler_dynamic.so 00:03:08.269 SO libspdk_blob_bdev.so.12.0 00:03:08.269 LIB libspdk_keyring_file.a 00:03:08.269 SO libspdk_keyring_file.so.2.0 00:03:08.269 SYMLINK libspdk_blob_bdev.so 00:03:08.269 LIB libspdk_accel_error.a 00:03:08.269 CC 
module/keyring/linux/keyring.o 00:03:08.269 CC module/keyring/linux/keyring_rpc.o 00:03:08.269 SO libspdk_accel_error.so.2.0 00:03:08.269 SYMLINK libspdk_keyring_file.so 00:03:08.269 CC module/fsdev/aio/linux_aio_mgr.o 00:03:08.269 CC module/accel/ioat/accel_ioat.o 00:03:08.529 SYMLINK libspdk_accel_error.so 00:03:08.529 CC module/accel/dsa/accel_dsa.o 00:03:08.529 CC module/accel/ioat/accel_ioat_rpc.o 00:03:08.529 LIB libspdk_keyring_linux.a 00:03:08.529 SO libspdk_keyring_linux.so.1.0 00:03:08.529 CC module/accel/iaa/accel_iaa.o 00:03:08.529 LIB libspdk_fsdev_aio.a 00:03:08.529 CC module/accel/iaa/accel_iaa_rpc.o 00:03:08.529 LIB libspdk_sock_uring.a 00:03:08.529 SYMLINK libspdk_keyring_linux.so 00:03:08.529 SO libspdk_fsdev_aio.so.1.0 00:03:08.529 LIB libspdk_sock_posix.a 00:03:08.788 LIB libspdk_accel_ioat.a 00:03:08.788 SO libspdk_sock_uring.so.5.0 00:03:08.788 CC module/bdev/delay/vbdev_delay.o 00:03:08.788 SO libspdk_sock_posix.so.6.0 00:03:08.788 SO libspdk_accel_ioat.so.6.0 00:03:08.788 SYMLINK libspdk_sock_uring.so 00:03:08.788 SYMLINK libspdk_fsdev_aio.so 00:03:08.788 CC module/accel/dsa/accel_dsa_rpc.o 00:03:08.788 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:08.788 SYMLINK libspdk_accel_ioat.so 00:03:08.788 CC module/bdev/error/vbdev_error.o 00:03:08.788 SYMLINK libspdk_sock_posix.so 00:03:08.788 CC module/bdev/gpt/gpt.o 00:03:08.788 CC module/bdev/error/vbdev_error_rpc.o 00:03:08.788 LIB libspdk_accel_iaa.a 00:03:08.788 SO libspdk_accel_iaa.so.3.0 00:03:08.788 LIB libspdk_accel_dsa.a 00:03:08.788 CC module/bdev/lvol/vbdev_lvol.o 00:03:09.047 SYMLINK libspdk_accel_iaa.so 00:03:09.047 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:09.047 SO libspdk_accel_dsa.so.5.0 00:03:09.047 CC module/bdev/malloc/bdev_malloc.o 00:03:09.047 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:09.047 CC module/bdev/gpt/vbdev_gpt.o 00:03:09.047 CC module/blobfs/bdev/blobfs_bdev.o 00:03:09.047 SYMLINK libspdk_accel_dsa.so 00:03:09.047 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:09.047 LIB libspdk_bdev_error.a 00:03:09.047 LIB libspdk_bdev_delay.a 00:03:09.047 SO libspdk_bdev_error.so.6.0 00:03:09.047 SO libspdk_bdev_delay.so.6.0 00:03:09.047 CC module/bdev/null/bdev_null.o 00:03:09.047 SYMLINK libspdk_bdev_error.so 00:03:09.307 SYMLINK libspdk_bdev_delay.so 00:03:09.307 LIB libspdk_blobfs_bdev.a 00:03:09.307 CC module/bdev/null/bdev_null_rpc.o 00:03:09.307 SO libspdk_blobfs_bdev.so.6.0 00:03:09.307 LIB libspdk_bdev_gpt.a 00:03:09.307 SO libspdk_bdev_gpt.so.6.0 00:03:09.307 SYMLINK libspdk_blobfs_bdev.so 00:03:09.307 CC module/bdev/passthru/vbdev_passthru.o 00:03:09.307 LIB libspdk_bdev_malloc.a 00:03:09.307 CC module/bdev/nvme/bdev_nvme.o 00:03:09.307 CC module/bdev/raid/bdev_raid.o 00:03:09.307 SYMLINK libspdk_bdev_gpt.so 00:03:09.307 CC module/bdev/raid/bdev_raid_rpc.o 00:03:09.307 SO libspdk_bdev_malloc.so.6.0 00:03:09.307 CC module/bdev/raid/bdev_raid_sb.o 00:03:09.307 LIB libspdk_bdev_lvol.a 00:03:09.566 LIB libspdk_bdev_null.a 00:03:09.566 SO libspdk_bdev_lvol.so.6.0 00:03:09.566 CC module/bdev/split/vbdev_split.o 00:03:09.566 SYMLINK libspdk_bdev_malloc.so 00:03:09.566 CC module/bdev/raid/raid0.o 00:03:09.566 SO libspdk_bdev_null.so.6.0 00:03:09.566 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:09.566 SYMLINK libspdk_bdev_lvol.so 00:03:09.566 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:09.566 SYMLINK libspdk_bdev_null.so 00:03:09.566 CC module/bdev/raid/raid1.o 00:03:09.566 CC module/bdev/split/vbdev_split_rpc.o 00:03:09.566 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:09.825 
CC module/bdev/raid/concat.o 00:03:09.825 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:09.825 LIB libspdk_bdev_split.a 00:03:09.825 SO libspdk_bdev_split.so.6.0 00:03:09.825 LIB libspdk_bdev_passthru.a 00:03:09.825 LIB libspdk_bdev_zone_block.a 00:03:09.825 CC module/bdev/nvme/nvme_rpc.o 00:03:09.825 CC module/bdev/uring/bdev_uring.o 00:03:09.825 SYMLINK libspdk_bdev_split.so 00:03:09.825 CC module/bdev/uring/bdev_uring_rpc.o 00:03:09.825 SO libspdk_bdev_passthru.so.6.0 00:03:09.825 SO libspdk_bdev_zone_block.so.6.0 00:03:09.825 CC module/bdev/aio/bdev_aio.o 00:03:10.084 SYMLINK libspdk_bdev_passthru.so 00:03:10.084 CC module/bdev/nvme/bdev_mdns_client.o 00:03:10.084 CC module/bdev/nvme/vbdev_opal.o 00:03:10.084 SYMLINK libspdk_bdev_zone_block.so 00:03:10.084 CC module/bdev/aio/bdev_aio_rpc.o 00:03:10.084 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:10.084 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:10.342 LIB libspdk_bdev_uring.a 00:03:10.342 SO libspdk_bdev_uring.so.6.0 00:03:10.342 LIB libspdk_bdev_aio.a 00:03:10.342 CC module/bdev/ftl/bdev_ftl.o 00:03:10.342 SO libspdk_bdev_aio.so.6.0 00:03:10.342 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:10.342 SYMLINK libspdk_bdev_uring.so 00:03:10.342 LIB libspdk_bdev_raid.a 00:03:10.342 SYMLINK libspdk_bdev_aio.so 00:03:10.342 SO libspdk_bdev_raid.so.6.0 00:03:10.342 CC module/bdev/iscsi/bdev_iscsi.o 00:03:10.342 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:10.342 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:10.342 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:10.342 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:10.600 SYMLINK libspdk_bdev_raid.so 00:03:10.600 LIB libspdk_bdev_ftl.a 00:03:10.600 SO libspdk_bdev_ftl.so.6.0 00:03:10.600 SYMLINK libspdk_bdev_ftl.so 00:03:10.859 LIB libspdk_bdev_iscsi.a 00:03:10.859 SO libspdk_bdev_iscsi.so.6.0 00:03:10.859 SYMLINK libspdk_bdev_iscsi.so 00:03:10.859 LIB libspdk_bdev_virtio.a 00:03:11.118 SO libspdk_bdev_virtio.so.6.0 00:03:11.118 SYMLINK libspdk_bdev_virtio.so 00:03:12.092 LIB libspdk_bdev_nvme.a 00:03:12.092 SO libspdk_bdev_nvme.so.7.1 00:03:12.092 SYMLINK libspdk_bdev_nvme.so 00:03:12.660 CC module/event/subsystems/keyring/keyring.o 00:03:12.660 CC module/event/subsystems/scheduler/scheduler.o 00:03:12.660 CC module/event/subsystems/fsdev/fsdev.o 00:03:12.660 CC module/event/subsystems/sock/sock.o 00:03:12.660 CC module/event/subsystems/iobuf/iobuf.o 00:03:12.660 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:12.660 CC module/event/subsystems/vmd/vmd.o 00:03:12.660 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:12.660 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:12.919 LIB libspdk_event_fsdev.a 00:03:12.919 LIB libspdk_event_keyring.a 00:03:12.919 LIB libspdk_event_scheduler.a 00:03:12.919 LIB libspdk_event_vhost_blk.a 00:03:12.919 LIB libspdk_event_vmd.a 00:03:12.919 SO libspdk_event_fsdev.so.1.0 00:03:12.919 SO libspdk_event_keyring.so.1.0 00:03:12.919 LIB libspdk_event_sock.a 00:03:12.919 LIB libspdk_event_iobuf.a 00:03:12.919 SO libspdk_event_scheduler.so.4.0 00:03:12.919 SO libspdk_event_vhost_blk.so.3.0 00:03:12.919 SO libspdk_event_sock.so.5.0 00:03:12.919 SO libspdk_event_vmd.so.6.0 00:03:12.919 SO libspdk_event_iobuf.so.3.0 00:03:12.919 SYMLINK libspdk_event_keyring.so 00:03:12.919 SYMLINK libspdk_event_fsdev.so 00:03:12.919 SYMLINK libspdk_event_scheduler.so 00:03:12.919 SYMLINK libspdk_event_sock.so 00:03:12.919 SYMLINK libspdk_event_vhost_blk.so 00:03:12.919 SYMLINK libspdk_event_vmd.so 00:03:12.919 SYMLINK libspdk_event_iobuf.so 00:03:13.179 CC 
module/event/subsystems/accel/accel.o 00:03:13.438 LIB libspdk_event_accel.a 00:03:13.438 SO libspdk_event_accel.so.6.0 00:03:13.438 SYMLINK libspdk_event_accel.so 00:03:13.698 CC module/event/subsystems/bdev/bdev.o 00:03:13.957 LIB libspdk_event_bdev.a 00:03:13.957 SO libspdk_event_bdev.so.6.0 00:03:14.217 SYMLINK libspdk_event_bdev.so 00:03:14.217 CC module/event/subsystems/nbd/nbd.o 00:03:14.217 CC module/event/subsystems/scsi/scsi.o 00:03:14.217 CC module/event/subsystems/ublk/ublk.o 00:03:14.217 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:14.217 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:14.476 LIB libspdk_event_nbd.a 00:03:14.476 LIB libspdk_event_ublk.a 00:03:14.476 LIB libspdk_event_scsi.a 00:03:14.476 SO libspdk_event_nbd.so.6.0 00:03:14.476 SO libspdk_event_ublk.so.3.0 00:03:14.476 SO libspdk_event_scsi.so.6.0 00:03:14.735 SYMLINK libspdk_event_nbd.so 00:03:14.735 SYMLINK libspdk_event_ublk.so 00:03:14.735 SYMLINK libspdk_event_scsi.so 00:03:14.735 LIB libspdk_event_nvmf.a 00:03:14.735 SO libspdk_event_nvmf.so.6.0 00:03:14.735 SYMLINK libspdk_event_nvmf.so 00:03:14.994 CC module/event/subsystems/iscsi/iscsi.o 00:03:14.994 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:14.994 LIB libspdk_event_vhost_scsi.a 00:03:14.994 LIB libspdk_event_iscsi.a 00:03:14.994 SO libspdk_event_vhost_scsi.so.3.0 00:03:15.253 SO libspdk_event_iscsi.so.6.0 00:03:15.253 SYMLINK libspdk_event_vhost_scsi.so 00:03:15.253 SYMLINK libspdk_event_iscsi.so 00:03:15.512 SO libspdk.so.6.0 00:03:15.512 SYMLINK libspdk.so 00:03:15.512 CC app/trace_record/trace_record.o 00:03:15.512 CC app/spdk_lspci/spdk_lspci.o 00:03:15.771 CC app/spdk_nvme_perf/perf.o 00:03:15.771 CXX app/trace/trace.o 00:03:15.771 CC app/iscsi_tgt/iscsi_tgt.o 00:03:15.771 CC app/nvmf_tgt/nvmf_main.o 00:03:15.771 CC app/spdk_tgt/spdk_tgt.o 00:03:15.771 CC test/thread/poller_perf/poller_perf.o 00:03:15.771 CC examples/ioat/perf/perf.o 00:03:15.771 CC examples/util/zipf/zipf.o 00:03:15.771 LINK spdk_lspci 00:03:16.029 LINK spdk_trace_record 00:03:16.029 LINK zipf 00:03:16.029 LINK poller_perf 00:03:16.029 LINK spdk_tgt 00:03:16.029 LINK iscsi_tgt 00:03:16.029 LINK nvmf_tgt 00:03:16.029 LINK ioat_perf 00:03:16.029 LINK spdk_trace 00:03:16.029 CC app/spdk_nvme_identify/identify.o 00:03:16.287 CC app/spdk_nvme_discover/discovery_aer.o 00:03:16.287 CC app/spdk_top/spdk_top.o 00:03:16.287 CC examples/ioat/verify/verify.o 00:03:16.287 CC app/spdk_dd/spdk_dd.o 00:03:16.287 CC test/dma/test_dma/test_dma.o 00:03:16.287 CC app/fio/nvme/fio_plugin.o 00:03:16.545 CC test/app/bdev_svc/bdev_svc.o 00:03:16.545 LINK spdk_nvme_discover 00:03:16.545 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:16.545 LINK verify 00:03:16.545 LINK spdk_nvme_perf 00:03:16.545 LINK bdev_svc 00:03:16.804 CC app/vhost/vhost.o 00:03:16.804 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:16.804 LINK spdk_dd 00:03:16.804 TEST_HEADER include/spdk/accel.h 00:03:16.804 TEST_HEADER include/spdk/accel_module.h 00:03:16.804 TEST_HEADER include/spdk/assert.h 00:03:16.804 TEST_HEADER include/spdk/barrier.h 00:03:16.804 TEST_HEADER include/spdk/base64.h 00:03:16.804 TEST_HEADER include/spdk/bdev.h 00:03:16.804 TEST_HEADER include/spdk/bdev_module.h 00:03:16.804 TEST_HEADER include/spdk/bdev_zone.h 00:03:16.804 TEST_HEADER include/spdk/bit_array.h 00:03:16.804 TEST_HEADER include/spdk/bit_pool.h 00:03:16.804 TEST_HEADER include/spdk/blob_bdev.h 00:03:16.804 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:16.804 TEST_HEADER include/spdk/blobfs.h 00:03:16.804 TEST_HEADER 
include/spdk/blob.h 00:03:16.804 TEST_HEADER include/spdk/conf.h 00:03:16.804 TEST_HEADER include/spdk/config.h 00:03:16.804 TEST_HEADER include/spdk/cpuset.h 00:03:16.804 TEST_HEADER include/spdk/crc16.h 00:03:16.804 TEST_HEADER include/spdk/crc32.h 00:03:16.804 TEST_HEADER include/spdk/crc64.h 00:03:16.804 TEST_HEADER include/spdk/dif.h 00:03:16.804 TEST_HEADER include/spdk/dma.h 00:03:16.804 TEST_HEADER include/spdk/endian.h 00:03:16.804 TEST_HEADER include/spdk/env_dpdk.h 00:03:16.804 TEST_HEADER include/spdk/env.h 00:03:16.804 TEST_HEADER include/spdk/event.h 00:03:16.804 TEST_HEADER include/spdk/fd_group.h 00:03:16.804 TEST_HEADER include/spdk/fd.h 00:03:16.804 TEST_HEADER include/spdk/file.h 00:03:16.804 TEST_HEADER include/spdk/fsdev.h 00:03:16.804 TEST_HEADER include/spdk/fsdev_module.h 00:03:16.804 TEST_HEADER include/spdk/ftl.h 00:03:16.804 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:16.804 TEST_HEADER include/spdk/gpt_spec.h 00:03:16.804 TEST_HEADER include/spdk/hexlify.h 00:03:16.804 TEST_HEADER include/spdk/histogram_data.h 00:03:16.804 LINK test_dma 00:03:16.804 TEST_HEADER include/spdk/idxd.h 00:03:16.804 TEST_HEADER include/spdk/idxd_spec.h 00:03:16.804 TEST_HEADER include/spdk/init.h 00:03:16.804 TEST_HEADER include/spdk/ioat.h 00:03:16.804 TEST_HEADER include/spdk/ioat_spec.h 00:03:16.804 TEST_HEADER include/spdk/iscsi_spec.h 00:03:16.804 TEST_HEADER include/spdk/json.h 00:03:16.804 TEST_HEADER include/spdk/jsonrpc.h 00:03:16.804 TEST_HEADER include/spdk/keyring.h 00:03:16.804 LINK nvme_fuzz 00:03:16.804 TEST_HEADER include/spdk/keyring_module.h 00:03:16.804 TEST_HEADER include/spdk/likely.h 00:03:16.804 TEST_HEADER include/spdk/log.h 00:03:16.804 TEST_HEADER include/spdk/lvol.h 00:03:16.804 TEST_HEADER include/spdk/md5.h 00:03:16.804 LINK spdk_nvme_identify 00:03:16.804 TEST_HEADER include/spdk/memory.h 00:03:16.804 TEST_HEADER include/spdk/mmio.h 00:03:16.804 TEST_HEADER include/spdk/nbd.h 00:03:16.804 TEST_HEADER include/spdk/net.h 00:03:16.804 TEST_HEADER include/spdk/notify.h 00:03:16.804 TEST_HEADER include/spdk/nvme.h 00:03:16.804 LINK spdk_nvme 00:03:16.804 TEST_HEADER include/spdk/nvme_intel.h 00:03:16.804 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:16.804 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:16.804 TEST_HEADER include/spdk/nvme_spec.h 00:03:16.804 TEST_HEADER include/spdk/nvme_zns.h 00:03:16.804 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:16.804 LINK vhost 00:03:16.804 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:17.063 TEST_HEADER include/spdk/nvmf.h 00:03:17.063 TEST_HEADER include/spdk/nvmf_spec.h 00:03:17.063 TEST_HEADER include/spdk/nvmf_transport.h 00:03:17.063 TEST_HEADER include/spdk/opal.h 00:03:17.063 TEST_HEADER include/spdk/opal_spec.h 00:03:17.063 TEST_HEADER include/spdk/pci_ids.h 00:03:17.063 TEST_HEADER include/spdk/pipe.h 00:03:17.063 TEST_HEADER include/spdk/queue.h 00:03:17.063 TEST_HEADER include/spdk/reduce.h 00:03:17.063 TEST_HEADER include/spdk/rpc.h 00:03:17.063 TEST_HEADER include/spdk/scheduler.h 00:03:17.063 TEST_HEADER include/spdk/scsi.h 00:03:17.063 TEST_HEADER include/spdk/scsi_spec.h 00:03:17.063 LINK interrupt_tgt 00:03:17.063 TEST_HEADER include/spdk/sock.h 00:03:17.063 TEST_HEADER include/spdk/stdinc.h 00:03:17.063 TEST_HEADER include/spdk/string.h 00:03:17.063 TEST_HEADER include/spdk/thread.h 00:03:17.063 TEST_HEADER include/spdk/trace.h 00:03:17.063 TEST_HEADER include/spdk/trace_parser.h 00:03:17.063 TEST_HEADER include/spdk/tree.h 00:03:17.063 TEST_HEADER include/spdk/ublk.h 00:03:17.063 TEST_HEADER 
include/spdk/util.h 00:03:17.063 TEST_HEADER include/spdk/uuid.h 00:03:17.063 TEST_HEADER include/spdk/version.h 00:03:17.063 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:17.063 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:17.063 TEST_HEADER include/spdk/vhost.h 00:03:17.063 TEST_HEADER include/spdk/vmd.h 00:03:17.063 TEST_HEADER include/spdk/xor.h 00:03:17.063 TEST_HEADER include/spdk/zipf.h 00:03:17.063 CXX test/cpp_headers/accel.o 00:03:17.063 CXX test/cpp_headers/accel_module.o 00:03:17.063 CC test/env/mem_callbacks/mem_callbacks.o 00:03:17.063 LINK spdk_top 00:03:17.063 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:17.063 CC app/fio/bdev/fio_plugin.o 00:03:17.063 CC test/event/event_perf/event_perf.o 00:03:17.063 CC test/event/reactor/reactor.o 00:03:17.322 CC test/app/histogram_perf/histogram_perf.o 00:03:17.322 CXX test/cpp_headers/assert.o 00:03:17.322 CXX test/cpp_headers/barrier.o 00:03:17.322 CC test/app/jsoncat/jsoncat.o 00:03:17.322 LINK event_perf 00:03:17.322 LINK reactor 00:03:17.322 CC examples/thread/thread/thread_ex.o 00:03:17.322 LINK histogram_perf 00:03:17.322 LINK jsoncat 00:03:17.322 CXX test/cpp_headers/base64.o 00:03:17.322 CXX test/cpp_headers/bdev.o 00:03:17.580 CC test/app/stub/stub.o 00:03:17.580 CXX test/cpp_headers/bdev_module.o 00:03:17.580 CC test/event/reactor_perf/reactor_perf.o 00:03:17.580 CXX test/cpp_headers/bdev_zone.o 00:03:17.580 LINK thread 00:03:17.580 LINK spdk_bdev 00:03:17.580 LINK reactor_perf 00:03:17.580 CC test/event/app_repeat/app_repeat.o 00:03:17.580 LINK mem_callbacks 00:03:17.580 LINK stub 00:03:17.580 CXX test/cpp_headers/bit_array.o 00:03:17.839 CXX test/cpp_headers/bit_pool.o 00:03:17.839 CXX test/cpp_headers/blob_bdev.o 00:03:17.839 CC test/event/scheduler/scheduler.o 00:03:17.839 CXX test/cpp_headers/blobfs_bdev.o 00:03:17.839 LINK app_repeat 00:03:17.839 CC test/rpc_client/rpc_client_test.o 00:03:17.839 CC test/env/vtophys/vtophys.o 00:03:17.839 CXX test/cpp_headers/blobfs.o 00:03:17.839 CC examples/sock/hello_world/hello_sock.o 00:03:18.097 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:18.097 LINK scheduler 00:03:18.097 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:18.097 LINK vtophys 00:03:18.097 CC test/env/memory/memory_ut.o 00:03:18.097 LINK rpc_client_test 00:03:18.097 CXX test/cpp_headers/blob.o 00:03:18.097 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:18.097 CC test/env/pci/pci_ut.o 00:03:18.097 LINK env_dpdk_post_init 00:03:18.097 LINK hello_sock 00:03:18.356 CXX test/cpp_headers/conf.o 00:03:18.356 CC examples/vmd/lsvmd/lsvmd.o 00:03:18.356 CC test/accel/dif/dif.o 00:03:18.356 CC test/blobfs/mkfs/mkfs.o 00:03:18.356 CXX test/cpp_headers/config.o 00:03:18.615 LINK lsvmd 00:03:18.615 CXX test/cpp_headers/cpuset.o 00:03:18.615 LINK vhost_fuzz 00:03:18.615 CC examples/idxd/perf/perf.o 00:03:18.615 LINK pci_ut 00:03:18.615 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:18.615 CXX test/cpp_headers/crc16.o 00:03:18.615 LINK mkfs 00:03:18.615 LINK iscsi_fuzz 00:03:18.874 CC examples/vmd/led/led.o 00:03:18.874 CXX test/cpp_headers/crc32.o 00:03:18.874 LINK idxd_perf 00:03:18.874 LINK hello_fsdev 00:03:18.874 CC test/lvol/esnap/esnap.o 00:03:18.874 CXX test/cpp_headers/crc64.o 00:03:18.874 LINK led 00:03:19.133 LINK dif 00:03:19.133 CC examples/accel/perf/accel_perf.o 00:03:19.133 CC test/nvme/aer/aer.o 00:03:19.133 CC test/nvme/reset/reset.o 00:03:19.133 CC test/nvme/sgl/sgl.o 00:03:19.133 CXX test/cpp_headers/dif.o 00:03:19.392 LINK memory_ut 00:03:19.392 CXX test/cpp_headers/dma.o 00:03:19.392 CC 
examples/nvme/hello_world/hello_world.o 00:03:19.392 CC examples/blob/hello_world/hello_blob.o 00:03:19.392 CC test/nvme/e2edp/nvme_dp.o 00:03:19.392 LINK reset 00:03:19.392 LINK aer 00:03:19.392 LINK sgl 00:03:19.392 CXX test/cpp_headers/endian.o 00:03:19.651 LINK hello_world 00:03:19.651 LINK hello_blob 00:03:19.651 LINK accel_perf 00:03:19.651 LINK nvme_dp 00:03:19.651 CC test/nvme/overhead/overhead.o 00:03:19.651 CC test/nvme/err_injection/err_injection.o 00:03:19.651 CC examples/nvme/reconnect/reconnect.o 00:03:19.651 CXX test/cpp_headers/env_dpdk.o 00:03:19.651 CC test/bdev/bdevio/bdevio.o 00:03:19.910 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:19.911 LINK err_injection 00:03:19.911 CC test/nvme/startup/startup.o 00:03:19.911 CXX test/cpp_headers/env.o 00:03:19.911 LINK overhead 00:03:19.911 CC examples/blob/cli/blobcli.o 00:03:19.911 CC examples/bdev/hello_world/hello_bdev.o 00:03:19.911 LINK reconnect 00:03:19.911 LINK startup 00:03:20.170 CC examples/nvme/arbitration/arbitration.o 00:03:20.170 CXX test/cpp_headers/event.o 00:03:20.170 LINK bdevio 00:03:20.170 CC test/nvme/reserve/reserve.o 00:03:20.170 LINK hello_bdev 00:03:20.170 CXX test/cpp_headers/fd_group.o 00:03:20.170 CC examples/nvme/hotplug/hotplug.o 00:03:20.170 CXX test/cpp_headers/fd.o 00:03:20.429 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:20.429 LINK nvme_manage 00:03:20.429 LINK reserve 00:03:20.429 LINK blobcli 00:03:20.429 LINK arbitration 00:03:20.429 LINK cmb_copy 00:03:20.429 CXX test/cpp_headers/file.o 00:03:20.429 CC examples/nvme/abort/abort.o 00:03:20.429 LINK hotplug 00:03:20.689 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:20.689 CC examples/bdev/bdevperf/bdevperf.o 00:03:20.689 CC test/nvme/simple_copy/simple_copy.o 00:03:20.689 CXX test/cpp_headers/fsdev.o 00:03:20.689 CC test/nvme/connect_stress/connect_stress.o 00:03:20.689 CC test/nvme/boot_partition/boot_partition.o 00:03:20.689 CXX test/cpp_headers/fsdev_module.o 00:03:20.689 CC test/nvme/compliance/nvme_compliance.o 00:03:20.689 LINK pmr_persistence 00:03:20.947 LINK simple_copy 00:03:20.947 LINK boot_partition 00:03:20.947 LINK connect_stress 00:03:20.947 LINK abort 00:03:20.947 CXX test/cpp_headers/ftl.o 00:03:20.947 CC test/nvme/fused_ordering/fused_ordering.o 00:03:20.947 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:20.947 CXX test/cpp_headers/fuse_dispatcher.o 00:03:21.206 CXX test/cpp_headers/gpt_spec.o 00:03:21.206 LINK nvme_compliance 00:03:21.206 CC test/nvme/fdp/fdp.o 00:03:21.206 CC test/nvme/cuse/cuse.o 00:03:21.206 CXX test/cpp_headers/hexlify.o 00:03:21.206 LINK doorbell_aers 00:03:21.206 LINK fused_ordering 00:03:21.206 CXX test/cpp_headers/histogram_data.o 00:03:21.206 CXX test/cpp_headers/idxd.o 00:03:21.206 CXX test/cpp_headers/idxd_spec.o 00:03:21.463 CXX test/cpp_headers/init.o 00:03:21.463 CXX test/cpp_headers/ioat.o 00:03:21.463 CXX test/cpp_headers/ioat_spec.o 00:03:21.463 LINK bdevperf 00:03:21.463 CXX test/cpp_headers/iscsi_spec.o 00:03:21.463 CXX test/cpp_headers/json.o 00:03:21.463 CXX test/cpp_headers/jsonrpc.o 00:03:21.463 LINK fdp 00:03:21.463 CXX test/cpp_headers/keyring.o 00:03:21.463 CXX test/cpp_headers/keyring_module.o 00:03:21.463 CXX test/cpp_headers/likely.o 00:03:21.721 CXX test/cpp_headers/log.o 00:03:21.721 CXX test/cpp_headers/lvol.o 00:03:21.721 CXX test/cpp_headers/md5.o 00:03:21.721 CXX test/cpp_headers/memory.o 00:03:21.721 CXX test/cpp_headers/mmio.o 00:03:21.721 CXX test/cpp_headers/nbd.o 00:03:21.721 CXX test/cpp_headers/net.o 00:03:21.721 CXX test/cpp_headers/notify.o 
00:03:21.721 CXX test/cpp_headers/nvme.o 00:03:21.721 CXX test/cpp_headers/nvme_intel.o 00:03:21.721 CXX test/cpp_headers/nvme_ocssd.o 00:03:21.979 CC examples/nvmf/nvmf/nvmf.o 00:03:21.979 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:21.979 CXX test/cpp_headers/nvme_spec.o 00:03:21.979 CXX test/cpp_headers/nvme_zns.o 00:03:21.979 CXX test/cpp_headers/nvmf_cmd.o 00:03:21.979 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:21.979 CXX test/cpp_headers/nvmf.o 00:03:21.979 CXX test/cpp_headers/nvmf_spec.o 00:03:21.979 CXX test/cpp_headers/nvmf_transport.o 00:03:22.237 CXX test/cpp_headers/opal.o 00:03:22.237 CXX test/cpp_headers/opal_spec.o 00:03:22.237 CXX test/cpp_headers/pci_ids.o 00:03:22.237 CXX test/cpp_headers/pipe.o 00:03:22.237 LINK nvmf 00:03:22.237 CXX test/cpp_headers/queue.o 00:03:22.237 CXX test/cpp_headers/reduce.o 00:03:22.237 CXX test/cpp_headers/rpc.o 00:03:22.237 CXX test/cpp_headers/scheduler.o 00:03:22.237 CXX test/cpp_headers/scsi.o 00:03:22.237 CXX test/cpp_headers/scsi_spec.o 00:03:22.237 CXX test/cpp_headers/sock.o 00:03:22.237 CXX test/cpp_headers/stdinc.o 00:03:22.496 CXX test/cpp_headers/string.o 00:03:22.496 CXX test/cpp_headers/thread.o 00:03:22.496 CXX test/cpp_headers/trace.o 00:03:22.496 LINK cuse 00:03:22.496 CXX test/cpp_headers/trace_parser.o 00:03:22.496 CXX test/cpp_headers/tree.o 00:03:22.496 CXX test/cpp_headers/ublk.o 00:03:22.496 CXX test/cpp_headers/util.o 00:03:22.496 CXX test/cpp_headers/uuid.o 00:03:22.496 CXX test/cpp_headers/version.o 00:03:22.496 CXX test/cpp_headers/vfio_user_pci.o 00:03:22.496 CXX test/cpp_headers/vfio_user_spec.o 00:03:22.496 CXX test/cpp_headers/vhost.o 00:03:22.782 CXX test/cpp_headers/vmd.o 00:03:22.782 CXX test/cpp_headers/xor.o 00:03:22.782 CXX test/cpp_headers/zipf.o 00:03:24.222 LINK esnap 00:03:24.507 00:03:24.507 real 1m28.673s 00:03:24.507 user 8m1.877s 00:03:24.507 sys 1m44.074s 00:03:24.507 ************************************ 00:03:24.507 END TEST make 00:03:24.507 ************************************ 00:03:24.507 13:42:23 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:24.507 13:42:23 make -- common/autotest_common.sh@10 -- $ set +x 00:03:24.507 13:42:23 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:24.507 13:42:23 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:24.507 13:42:23 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:24.507 13:42:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.507 13:42:23 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:24.507 13:42:23 -- pm/common@44 -- $ pid=5257 00:03:24.507 13:42:23 -- pm/common@50 -- $ kill -TERM 5257 00:03:24.507 13:42:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.507 13:42:23 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:24.507 13:42:23 -- pm/common@44 -- $ pid=5259 00:03:24.508 13:42:23 -- pm/common@50 -- $ kill -TERM 5259 00:03:24.508 13:42:23 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:24.508 13:42:23 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:24.508 13:42:23 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:24.508 13:42:23 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:24.508 13:42:23 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:24.767 13:42:23 -- common/autotest_common.sh@1711 -- # lt 
1.15 2 00:03:24.767 13:42:23 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:24.767 13:42:23 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:24.767 13:42:23 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:24.767 13:42:23 -- scripts/common.sh@336 -- # IFS=.-: 00:03:24.767 13:42:23 -- scripts/common.sh@336 -- # read -ra ver1 00:03:24.767 13:42:23 -- scripts/common.sh@337 -- # IFS=.-: 00:03:24.767 13:42:23 -- scripts/common.sh@337 -- # read -ra ver2 00:03:24.767 13:42:23 -- scripts/common.sh@338 -- # local 'op=<' 00:03:24.767 13:42:23 -- scripts/common.sh@340 -- # ver1_l=2 00:03:24.767 13:42:23 -- scripts/common.sh@341 -- # ver2_l=1 00:03:24.767 13:42:23 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:24.767 13:42:23 -- scripts/common.sh@344 -- # case "$op" in 00:03:24.767 13:42:23 -- scripts/common.sh@345 -- # : 1 00:03:24.767 13:42:23 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:24.767 13:42:23 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:24.767 13:42:23 -- scripts/common.sh@365 -- # decimal 1 00:03:24.767 13:42:23 -- scripts/common.sh@353 -- # local d=1 00:03:24.767 13:42:24 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:24.767 13:42:24 -- scripts/common.sh@355 -- # echo 1 00:03:24.767 13:42:24 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:24.767 13:42:24 -- scripts/common.sh@366 -- # decimal 2 00:03:24.767 13:42:24 -- scripts/common.sh@353 -- # local d=2 00:03:24.767 13:42:24 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:24.767 13:42:24 -- scripts/common.sh@355 -- # echo 2 00:03:24.767 13:42:24 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:24.767 13:42:24 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:24.767 13:42:24 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:24.767 13:42:24 -- scripts/common.sh@368 -- # return 0 00:03:24.767 13:42:24 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:24.767 13:42:24 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:24.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:24.767 --rc genhtml_branch_coverage=1 00:03:24.767 --rc genhtml_function_coverage=1 00:03:24.767 --rc genhtml_legend=1 00:03:24.767 --rc geninfo_all_blocks=1 00:03:24.767 --rc geninfo_unexecuted_blocks=1 00:03:24.767 00:03:24.767 ' 00:03:24.767 13:42:24 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:24.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:24.767 --rc genhtml_branch_coverage=1 00:03:24.767 --rc genhtml_function_coverage=1 00:03:24.767 --rc genhtml_legend=1 00:03:24.767 --rc geninfo_all_blocks=1 00:03:24.767 --rc geninfo_unexecuted_blocks=1 00:03:24.767 00:03:24.767 ' 00:03:24.767 13:42:24 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:24.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:24.767 --rc genhtml_branch_coverage=1 00:03:24.767 --rc genhtml_function_coverage=1 00:03:24.767 --rc genhtml_legend=1 00:03:24.767 --rc geninfo_all_blocks=1 00:03:24.767 --rc geninfo_unexecuted_blocks=1 00:03:24.767 00:03:24.767 ' 00:03:24.767 13:42:24 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:24.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:24.767 --rc genhtml_branch_coverage=1 00:03:24.767 --rc genhtml_function_coverage=1 00:03:24.767 --rc genhtml_legend=1 00:03:24.767 --rc geninfo_all_blocks=1 00:03:24.767 --rc geninfo_unexecuted_blocks=1 00:03:24.767 00:03:24.767 ' 00:03:24.767 
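The xtrace above shows scripts/common.sh deciding whether the installed lcov (1.15) is older than 2 before it picks the --rc lcov_* coverage flags. A minimal standalone sketch of that field-by-field comparison follows; it is an illustration of the logic visible in the trace, not the actual scripts/common.sh source, and the function name is made up for this example.

#!/usr/bin/env bash
# Return 0 (true) if $1 is strictly lower than $2, comparing dot-separated fields.
version_lt() {
    local -a v1 v2
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 < 2: use the legacy --rc lcov_* option names"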
13:42:24 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:24.767 13:42:24 -- nvmf/common.sh@7 -- # uname -s 00:03:24.767 13:42:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:24.767 13:42:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:24.767 13:42:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:24.767 13:42:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:24.767 13:42:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:24.767 13:42:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:24.767 13:42:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:24.767 13:42:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:24.767 13:42:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:24.767 13:42:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:24.767 13:42:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:03:24.767 13:42:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=cfa2def7-c8af-457f-82a0-b312efdea7f4 00:03:24.767 13:42:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:24.767 13:42:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:24.767 13:42:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:24.767 13:42:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:24.767 13:42:24 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:24.767 13:42:24 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:24.767 13:42:24 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:24.767 13:42:24 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:24.767 13:42:24 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:24.767 13:42:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:24.767 13:42:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:24.767 13:42:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:24.767 13:42:24 -- paths/export.sh@5 -- # export PATH 00:03:24.767 13:42:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:24.767 13:42:24 -- nvmf/common.sh@51 -- # : 0 00:03:24.767 13:42:24 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:24.767 13:42:24 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:24.767 13:42:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:24.767 13:42:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:24.767 13:42:24 
-- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:24.767 13:42:24 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:24.767 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:24.767 13:42:24 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:24.767 13:42:24 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:24.767 13:42:24 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:24.767 13:42:24 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:24.767 13:42:24 -- spdk/autotest.sh@32 -- # uname -s 00:03:24.767 13:42:24 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:24.767 13:42:24 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:24.767 13:42:24 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:24.767 13:42:24 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:24.767 13:42:24 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:24.767 13:42:24 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:24.767 13:42:24 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:24.767 13:42:24 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:24.767 13:42:24 -- spdk/autotest.sh@48 -- # udevadm_pid=54358 00:03:24.767 13:42:24 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:24.767 13:42:24 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:24.767 13:42:24 -- pm/common@17 -- # local monitor 00:03:24.767 13:42:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.767 13:42:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.767 13:42:24 -- pm/common@25 -- # sleep 1 00:03:24.767 13:42:24 -- pm/common@21 -- # date +%s 00:03:24.767 13:42:24 -- pm/common@21 -- # date +%s 00:03:24.768 13:42:24 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733492544 00:03:24.768 13:42:24 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733492544 00:03:24.768 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733492544_collect-cpu-load.pm.log 00:03:24.768 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733492544_collect-vmstat.pm.log 00:03:26.144 13:42:25 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:26.144 13:42:25 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:26.144 13:42:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:26.144 13:42:25 -- common/autotest_common.sh@10 -- # set +x 00:03:26.144 13:42:25 -- spdk/autotest.sh@59 -- # create_test_list 00:03:26.144 13:42:25 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:26.144 13:42:25 -- common/autotest_common.sh@10 -- # set +x 00:03:26.144 13:42:25 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:26.144 13:42:25 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:26.144 13:42:25 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:26.144 13:42:25 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:26.144 13:42:25 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:26.144 13:42:25 -- spdk/autotest.sh@65 -- # 
freebsd_update_contigmem_mod 00:03:26.144 13:42:25 -- common/autotest_common.sh@1457 -- # uname 00:03:26.144 13:42:25 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:26.144 13:42:25 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:26.144 13:42:25 -- common/autotest_common.sh@1477 -- # uname 00:03:26.144 13:42:25 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:26.144 13:42:25 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:26.144 13:42:25 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:26.144 lcov: LCOV version 1.15 00:03:26.144 13:42:25 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:41.025 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:41.025 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:55.923 13:42:53 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:55.923 13:42:53 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:55.923 13:42:53 -- common/autotest_common.sh@10 -- # set +x 00:03:55.923 13:42:53 -- spdk/autotest.sh@78 -- # rm -f 00:03:55.923 13:42:53 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:55.923 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:55.923 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:55.923 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:55.923 13:42:53 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:55.923 13:42:53 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:55.923 13:42:53 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:55.923 13:42:53 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:55.923 13:42:53 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:55.923 13:42:53 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:55.923 13:42:53 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:55.923 13:42:53 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:03:55.923 13:42:53 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:55.923 13:42:53 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:55.923 13:42:53 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:55.923 13:42:53 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:55.923 13:42:53 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:55.923 13:42:53 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:55.924 13:42:53 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:03:55.924 13:42:53 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:55.924 13:42:53 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:03:55.924 13:42:53 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:03:55.924 13:42:53 -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme1n1/queue/zoned ]] 00:03:55.924 13:42:53 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:55.924 13:42:53 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:55.924 13:42:53 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:03:55.924 13:42:53 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:03:55.924 13:42:53 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:55.924 13:42:53 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:55.924 13:42:53 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:55.924 13:42:53 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:03:55.924 13:42:53 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:03:55.924 13:42:53 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:55.924 13:42:53 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:55.924 13:42:53 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:55.924 13:42:53 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:55.924 13:42:53 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:55.924 13:42:53 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:55.924 13:42:53 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:55.924 13:42:53 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:55.924 No valid GPT data, bailing 00:03:55.924 13:42:54 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:55.924 13:42:54 -- scripts/common.sh@394 -- # pt= 00:03:55.924 13:42:54 -- scripts/common.sh@395 -- # return 1 00:03:55.924 13:42:54 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:55.924 1+0 records in 00:03:55.924 1+0 records out 00:03:55.924 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0057802 s, 181 MB/s 00:03:55.924 13:42:54 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:55.924 13:42:54 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:55.924 13:42:54 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:55.924 13:42:54 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:55.924 13:42:54 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:55.924 No valid GPT data, bailing 00:03:55.924 13:42:54 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:55.924 13:42:54 -- scripts/common.sh@394 -- # pt= 00:03:55.924 13:42:54 -- scripts/common.sh@395 -- # return 1 00:03:55.924 13:42:54 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:55.924 1+0 records in 00:03:55.924 1+0 records out 00:03:55.924 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00515692 s, 203 MB/s 00:03:55.924 13:42:54 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:55.924 13:42:54 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:55.924 13:42:54 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:03:55.924 13:42:54 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:03:55.924 13:42:54 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:55.924 No valid GPT data, bailing 00:03:55.924 13:42:54 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:55.924 13:42:54 -- scripts/common.sh@394 -- # pt= 00:03:55.924 13:42:54 -- scripts/common.sh@395 -- # return 1 00:03:55.924 13:42:54 -- spdk/autotest.sh@101 -- # dd if=/dev/zero 
of=/dev/nvme1n2 bs=1M count=1 00:03:55.924 1+0 records in 00:03:55.924 1+0 records out 00:03:55.924 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00497108 s, 211 MB/s 00:03:55.924 13:42:54 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:55.924 13:42:54 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:55.924 13:42:54 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:03:55.924 13:42:54 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:03:55.924 13:42:54 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:55.924 No valid GPT data, bailing 00:03:55.924 13:42:54 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:55.924 13:42:54 -- scripts/common.sh@394 -- # pt= 00:03:55.924 13:42:54 -- scripts/common.sh@395 -- # return 1 00:03:55.924 13:42:54 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:55.924 1+0 records in 00:03:55.924 1+0 records out 00:03:55.924 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00449637 s, 233 MB/s 00:03:55.924 13:42:54 -- spdk/autotest.sh@105 -- # sync 00:03:55.924 13:42:54 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:55.924 13:42:54 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:55.924 13:42:54 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:57.833 13:42:56 -- spdk/autotest.sh@111 -- # uname -s 00:03:57.833 13:42:56 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:57.833 13:42:56 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:57.833 13:42:56 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:58.091 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:58.091 Hugepages 00:03:58.091 node hugesize free / total 00:03:58.091 node0 1048576kB 0 / 0 00:03:58.091 node0 2048kB 0 / 0 00:03:58.091 00:03:58.091 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:58.091 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:58.349 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:58.349 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:58.349 13:42:57 -- spdk/autotest.sh@117 -- # uname -s 00:03:58.349 13:42:57 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:58.349 13:42:57 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:58.349 13:42:57 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:58.916 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:59.174 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:59.174 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:59.174 13:42:58 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:00.168 13:42:59 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:00.168 13:42:59 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:00.168 13:42:59 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:00.168 13:42:59 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:00.168 13:42:59 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:00.168 13:42:59 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:00.168 13:42:59 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:00.168 13:42:59 -- common/autotest_common.sh@1499 -- # 
/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:00.168 13:42:59 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:00.426 13:42:59 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:00.426 13:42:59 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:00.426 13:42:59 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:00.686 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:00.686 Waiting for block devices as requested 00:04:00.686 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:00.945 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:00.945 13:43:00 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:00.945 13:43:00 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:00.945 13:43:00 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:00.945 13:43:00 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:00.945 13:43:00 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:00.945 13:43:00 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:00.945 13:43:00 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:00.945 13:43:00 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:00.945 13:43:00 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:00.945 13:43:00 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:00.945 13:43:00 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:00.945 13:43:00 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:00.945 13:43:00 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:00.945 13:43:00 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:00.945 13:43:00 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:00.945 13:43:00 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:00.945 13:43:00 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:00.945 13:43:00 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:00.945 13:43:00 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:00.945 13:43:00 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:00.945 13:43:00 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:00.945 13:43:00 -- common/autotest_common.sh@1543 -- # continue 00:04:00.945 13:43:00 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:00.945 13:43:00 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:00.945 13:43:00 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:00.945 13:43:00 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:00.945 13:43:00 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:00.945 13:43:00 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:00.945 13:43:00 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:00.945 13:43:00 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:00.945 13:43:00 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 
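The loop above maps each PCI address back to its /dev/nvmeX controller through sysfs, then uses nvme-cli's id-ctrl output to test whether the controller supports namespace management (OACS bit 3) and whether any unallocated NVM capacity remains. A condensed sketch of that check, assuming nvme-cli is installed and the sysfs layout shown in the trace; paths and variable names here are illustrative, not the real autotest_common.sh helpers.

#!/usr/bin/env bash
# For a PCI bdf like 0000:00:10.0, find its /dev/nvmeX controller and
# skip it unless namespace management is supported and unvmcap is non-zero.
bdf=0000:00:10.0
ctrl_sysfs=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
ctrl=/dev/$(basename "$ctrl_sysfs")

oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)
if (( (oacs & 0x8) == 0 )); then
    echo "$ctrl: no namespace management support, skipping"
    exit 0
fi

unvmcap=$(nvme id-ctrl "$ctrl" | grep unvmcap | cut -d: -f2)
if (( unvmcap == 0 )); then
    echo "$ctrl: no unallocated capacity to revert, skipping"
fi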
00:04:00.945 13:43:00 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:00.945 13:43:00 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:00.945 13:43:00 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:00.945 13:43:00 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:00.945 13:43:00 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:00.945 13:43:00 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:00.945 13:43:00 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:00.945 13:43:00 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:00.945 13:43:00 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:00.945 13:43:00 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:00.945 13:43:00 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:00.945 13:43:00 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:00.945 13:43:00 -- common/autotest_common.sh@1543 -- # continue 00:04:00.945 13:43:00 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:00.945 13:43:00 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:00.945 13:43:00 -- common/autotest_common.sh@10 -- # set +x 00:04:00.945 13:43:00 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:00.945 13:43:00 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:00.945 13:43:00 -- common/autotest_common.sh@10 -- # set +x 00:04:00.945 13:43:00 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:01.882 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:01.882 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:01.882 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:01.882 13:43:01 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:01.882 13:43:01 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:01.882 13:43:01 -- common/autotest_common.sh@10 -- # set +x 00:04:01.882 13:43:01 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:01.882 13:43:01 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:01.882 13:43:01 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:01.882 13:43:01 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:01.882 13:43:01 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:01.882 13:43:01 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:01.882 13:43:01 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:01.882 13:43:01 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:01.882 13:43:01 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:01.882 13:43:01 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:01.882 13:43:01 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:01.882 13:43:01 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:01.882 13:43:01 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:01.882 13:43:01 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:01.882 13:43:01 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:01.882 13:43:01 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:01.882 13:43:01 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:01.882 13:43:01 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:01.882 13:43:01 -- 
common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:01.882 13:43:01 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:01.882 13:43:01 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:02.141 13:43:01 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:02.141 13:43:01 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:02.141 13:43:01 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:02.141 13:43:01 -- common/autotest_common.sh@1572 -- # return 0 00:04:02.141 13:43:01 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:02.141 13:43:01 -- common/autotest_common.sh@1580 -- # return 0 00:04:02.141 13:43:01 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:02.141 13:43:01 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:02.141 13:43:01 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:02.141 13:43:01 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:02.141 13:43:01 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:02.141 13:43:01 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:02.141 13:43:01 -- common/autotest_common.sh@10 -- # set +x 00:04:02.141 13:43:01 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:04:02.141 13:43:01 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:02.141 13:43:01 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:02.141 13:43:01 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:02.141 13:43:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.141 13:43:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.141 13:43:01 -- common/autotest_common.sh@10 -- # set +x 00:04:02.141 ************************************ 00:04:02.141 START TEST env 00:04:02.141 ************************************ 00:04:02.141 13:43:01 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:02.141 * Looking for test storage... 00:04:02.141 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:02.141 13:43:01 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:02.141 13:43:01 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:02.141 13:43:01 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:02.141 13:43:01 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:02.141 13:43:01 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:02.141 13:43:01 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:02.141 13:43:01 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:02.141 13:43:01 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:02.141 13:43:01 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:02.141 13:43:01 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:02.141 13:43:01 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:02.141 13:43:01 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:02.141 13:43:01 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:02.141 13:43:01 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:02.141 13:43:01 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:02.141 13:43:01 env -- scripts/common.sh@344 -- # case "$op" in 00:04:02.141 13:43:01 env -- scripts/common.sh@345 -- # : 1 00:04:02.141 13:43:01 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:02.141 13:43:01 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:02.141 13:43:01 env -- scripts/common.sh@365 -- # decimal 1 00:04:02.141 13:43:01 env -- scripts/common.sh@353 -- # local d=1 00:04:02.141 13:43:01 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:02.141 13:43:01 env -- scripts/common.sh@355 -- # echo 1 00:04:02.141 13:43:01 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:02.141 13:43:01 env -- scripts/common.sh@366 -- # decimal 2 00:04:02.141 13:43:01 env -- scripts/common.sh@353 -- # local d=2 00:04:02.141 13:43:01 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:02.141 13:43:01 env -- scripts/common.sh@355 -- # echo 2 00:04:02.141 13:43:01 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:02.141 13:43:01 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:02.141 13:43:01 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:02.141 13:43:01 env -- scripts/common.sh@368 -- # return 0 00:04:02.141 13:43:01 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:02.141 13:43:01 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:02.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.141 --rc genhtml_branch_coverage=1 00:04:02.141 --rc genhtml_function_coverage=1 00:04:02.141 --rc genhtml_legend=1 00:04:02.141 --rc geninfo_all_blocks=1 00:04:02.141 --rc geninfo_unexecuted_blocks=1 00:04:02.141 00:04:02.141 ' 00:04:02.141 13:43:01 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:02.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.141 --rc genhtml_branch_coverage=1 00:04:02.141 --rc genhtml_function_coverage=1 00:04:02.141 --rc genhtml_legend=1 00:04:02.141 --rc geninfo_all_blocks=1 00:04:02.141 --rc geninfo_unexecuted_blocks=1 00:04:02.141 00:04:02.141 ' 00:04:02.141 13:43:01 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:02.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.141 --rc genhtml_branch_coverage=1 00:04:02.141 --rc genhtml_function_coverage=1 00:04:02.141 --rc genhtml_legend=1 00:04:02.141 --rc geninfo_all_blocks=1 00:04:02.141 --rc geninfo_unexecuted_blocks=1 00:04:02.141 00:04:02.141 ' 00:04:02.141 13:43:01 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:02.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.141 --rc genhtml_branch_coverage=1 00:04:02.141 --rc genhtml_function_coverage=1 00:04:02.141 --rc genhtml_legend=1 00:04:02.141 --rc geninfo_all_blocks=1 00:04:02.141 --rc geninfo_unexecuted_blocks=1 00:04:02.141 00:04:02.141 ' 00:04:02.142 13:43:01 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:02.142 13:43:01 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.142 13:43:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.142 13:43:01 env -- common/autotest_common.sh@10 -- # set +x 00:04:02.142 ************************************ 00:04:02.142 START TEST env_memory 00:04:02.142 ************************************ 00:04:02.142 13:43:01 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:02.142 00:04:02.142 00:04:02.142 CUnit - A unit testing framework for C - Version 2.1-3 00:04:02.142 http://cunit.sourceforge.net/ 00:04:02.142 00:04:02.142 00:04:02.142 Suite: memory 00:04:02.400 Test: alloc and free memory map ...[2024-12-06 13:43:01.556842] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:02.400 passed 00:04:02.400 Test: mem map translation ...[2024-12-06 13:43:01.587629] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:02.400 [2024-12-06 13:43:01.587669] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:02.400 [2024-12-06 13:43:01.587724] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:02.400 [2024-12-06 13:43:01.587735] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:02.400 passed 00:04:02.400 Test: mem map registration ...[2024-12-06 13:43:01.651310] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:02.400 [2024-12-06 13:43:01.651364] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:02.400 passed 00:04:02.400 Test: mem map adjacent registrations ...passed 00:04:02.400 00:04:02.400 Run Summary: Type Total Ran Passed Failed Inactive 00:04:02.400 suites 1 1 n/a 0 0 00:04:02.400 tests 4 4 4 0 0 00:04:02.400 asserts 152 152 152 0 n/a 00:04:02.400 00:04:02.400 Elapsed time = 0.214 seconds 00:04:02.400 00:04:02.400 real 0m0.229s 00:04:02.400 user 0m0.211s 00:04:02.400 sys 0m0.015s 00:04:02.400 13:43:01 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.400 13:43:01 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:02.400 ************************************ 00:04:02.400 END TEST env_memory 00:04:02.400 ************************************ 00:04:02.400 13:43:01 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:02.400 13:43:01 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.400 13:43:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.401 13:43:01 env -- common/autotest_common.sh@10 -- # set +x 00:04:02.401 ************************************ 00:04:02.401 START TEST env_vtophys 00:04:02.401 ************************************ 00:04:02.401 13:43:01 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:02.660 EAL: lib.eal log level changed from notice to debug 00:04:02.660 EAL: Detected lcore 0 as core 0 on socket 0 00:04:02.660 EAL: Detected lcore 1 as core 0 on socket 0 00:04:02.660 EAL: Detected lcore 2 as core 0 on socket 0 00:04:02.660 EAL: Detected lcore 3 as core 0 on socket 0 00:04:02.660 EAL: Detected lcore 4 as core 0 on socket 0 00:04:02.660 EAL: Detected lcore 5 as core 0 on socket 0 00:04:02.660 EAL: Detected lcore 6 as core 0 on socket 0 00:04:02.660 EAL: Detected lcore 7 as core 0 on socket 0 00:04:02.660 EAL: Detected lcore 8 as core 0 on socket 0 00:04:02.660 EAL: Detected lcore 9 as core 0 on socket 0 00:04:02.660 EAL: Maximum logical cores by configuration: 128 00:04:02.660 EAL: Detected CPU lcores: 10 00:04:02.660 EAL: Detected NUMA nodes: 1 00:04:02.660 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:02.660 EAL: Detected shared linkage of DPDK 00:04:02.660 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:02.660 EAL: Selected IOVA mode 'PA' 00:04:02.660 EAL: Probing VFIO support... 00:04:02.660 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:02.660 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:02.660 EAL: Ask a virtual area of 0x2e000 bytes 00:04:02.660 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:02.660 EAL: Setting up physically contiguous memory... 00:04:02.660 EAL: Setting maximum number of open files to 524288 00:04:02.660 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:02.660 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:02.660 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.660 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:02.660 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:02.660 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.660 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:02.660 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:02.660 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.660 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:02.660 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:02.660 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.660 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:02.660 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:02.660 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.660 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:02.660 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:02.660 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.660 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:02.660 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:02.660 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.660 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:02.660 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:02.660 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.660 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:02.660 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:02.660 EAL: Hugepages will be freed exactly as allocated. 00:04:02.660 EAL: No shared files mode enabled, IPC is disabled 00:04:02.661 EAL: No shared files mode enabled, IPC is disabled 00:04:02.661 EAL: TSC frequency is ~2200000 KHz 00:04:02.661 EAL: Main lcore 0 is ready (tid=7f4694624a00;cpuset=[0]) 00:04:02.661 EAL: Trying to obtain current memory policy. 00:04:02.661 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.661 EAL: Restoring previous memory policy: 0 00:04:02.661 EAL: request: mp_malloc_sync 00:04:02.661 EAL: No shared files mode enabled, IPC is disabled 00:04:02.661 EAL: Heap on socket 0 was expanded by 2MB 00:04:02.661 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:02.661 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:02.661 EAL: Mem event callback 'spdk:(nil)' registered 00:04:02.661 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:02.661 00:04:02.661 00:04:02.661 CUnit - A unit testing framework for C - Version 2.1-3 00:04:02.661 http://cunit.sourceforge.net/ 00:04:02.661 00:04:02.661 00:04:02.661 Suite: components_suite 00:04:02.661 Test: vtophys_malloc_test ...passed 00:04:02.661 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:02.661 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.661 EAL: Restoring previous memory policy: 4 00:04:02.661 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.661 EAL: request: mp_malloc_sync 00:04:02.661 EAL: No shared files mode enabled, IPC is disabled 00:04:02.661 EAL: Heap on socket 0 was expanded by 4MB 00:04:02.661 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.661 EAL: request: mp_malloc_sync 00:04:02.661 EAL: No shared files mode enabled, IPC is disabled 00:04:02.661 EAL: Heap on socket 0 was shrunk by 4MB 00:04:02.661 EAL: Trying to obtain current memory policy. 00:04:02.661 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.661 EAL: Restoring previous memory policy: 4 00:04:02.661 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.661 EAL: request: mp_malloc_sync 00:04:02.661 EAL: No shared files mode enabled, IPC is disabled 00:04:02.661 EAL: Heap on socket 0 was expanded by 6MB 00:04:02.661 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.661 EAL: request: mp_malloc_sync 00:04:02.661 EAL: No shared files mode enabled, IPC is disabled 00:04:02.661 EAL: Heap on socket 0 was shrunk by 6MB 00:04:02.661 EAL: Trying to obtain current memory policy. 00:04:02.661 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.661 EAL: Restoring previous memory policy: 4 00:04:02.661 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.661 EAL: request: mp_malloc_sync 00:04:02.661 EAL: No shared files mode enabled, IPC is disabled 00:04:02.661 EAL: Heap on socket 0 was expanded by 10MB 00:04:02.661 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.661 EAL: request: mp_malloc_sync 00:04:02.661 EAL: No shared files mode enabled, IPC is disabled 00:04:02.661 EAL: Heap on socket 0 was shrunk by 10MB 00:04:02.661 EAL: Trying to obtain current memory policy. 00:04:02.661 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.661 EAL: Restoring previous memory policy: 4 00:04:02.661 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.661 EAL: request: mp_malloc_sync 00:04:02.661 EAL: No shared files mode enabled, IPC is disabled 00:04:02.661 EAL: Heap on socket 0 was expanded by 18MB 00:04:02.661 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.661 EAL: request: mp_malloc_sync 00:04:02.661 EAL: No shared files mode enabled, IPC is disabled 00:04:02.661 EAL: Heap on socket 0 was shrunk by 18MB 00:04:02.661 EAL: Trying to obtain current memory policy. 00:04:02.661 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.661 EAL: Restoring previous memory policy: 4 00:04:02.661 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.661 EAL: request: mp_malloc_sync 00:04:02.661 EAL: No shared files mode enabled, IPC is disabled 00:04:02.661 EAL: Heap on socket 0 was expanded by 34MB 00:04:02.661 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.661 EAL: request: mp_malloc_sync 00:04:02.661 EAL: No shared files mode enabled, IPC is disabled 00:04:02.661 EAL: Heap on socket 0 was shrunk by 34MB 00:04:02.661 EAL: Trying to obtain current memory policy. 
00:04:02.661 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.661 EAL: Restoring previous memory policy: 4 00:04:02.661 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.661 EAL: request: mp_malloc_sync 00:04:02.661 EAL: No shared files mode enabled, IPC is disabled 00:04:02.661 EAL: Heap on socket 0 was expanded by 66MB 00:04:02.661 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.661 EAL: request: mp_malloc_sync 00:04:02.661 EAL: No shared files mode enabled, IPC is disabled 00:04:02.661 EAL: Heap on socket 0 was shrunk by 66MB 00:04:02.661 EAL: Trying to obtain current memory policy. 00:04:02.661 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.919 EAL: Restoring previous memory policy: 4 00:04:02.919 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.919 EAL: request: mp_malloc_sync 00:04:02.919 EAL: No shared files mode enabled, IPC is disabled 00:04:02.920 EAL: Heap on socket 0 was expanded by 130MB 00:04:02.920 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.920 EAL: request: mp_malloc_sync 00:04:02.920 EAL: No shared files mode enabled, IPC is disabled 00:04:02.920 EAL: Heap on socket 0 was shrunk by 130MB 00:04:02.920 EAL: Trying to obtain current memory policy. 00:04:02.920 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.920 EAL: Restoring previous memory policy: 4 00:04:02.920 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.920 EAL: request: mp_malloc_sync 00:04:02.920 EAL: No shared files mode enabled, IPC is disabled 00:04:02.920 EAL: Heap on socket 0 was expanded by 258MB 00:04:02.920 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.178 EAL: request: mp_malloc_sync 00:04:03.178 EAL: No shared files mode enabled, IPC is disabled 00:04:03.178 EAL: Heap on socket 0 was shrunk by 258MB 00:04:03.178 EAL: Trying to obtain current memory policy. 00:04:03.178 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.178 EAL: Restoring previous memory policy: 4 00:04:03.178 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.178 EAL: request: mp_malloc_sync 00:04:03.178 EAL: No shared files mode enabled, IPC is disabled 00:04:03.178 EAL: Heap on socket 0 was expanded by 514MB 00:04:03.436 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.436 EAL: request: mp_malloc_sync 00:04:03.436 EAL: No shared files mode enabled, IPC is disabled 00:04:03.436 EAL: Heap on socket 0 was shrunk by 514MB 00:04:03.436 EAL: Trying to obtain current memory policy. 
00:04:03.436 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.005 EAL: Restoring previous memory policy: 4 00:04:04.005 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.005 EAL: request: mp_malloc_sync 00:04:04.005 EAL: No shared files mode enabled, IPC is disabled 00:04:04.005 EAL: Heap on socket 0 was expanded by 1026MB 00:04:04.005 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.574 passed 00:04:04.574 00:04:04.574 Run Summary: Type Total Ran Passed Failed Inactive 00:04:04.574 suites 1 1 n/a 0 0 00:04:04.574 tests 2 2 2 0 0 00:04:04.574 asserts 5484 5484 5484 0 n/a 00:04:04.574 00:04:04.574 Elapsed time = 1.688 seconds 00:04:04.574 EAL: request: mp_malloc_sync 00:04:04.574 EAL: No shared files mode enabled, IPC is disabled 00:04:04.574 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:04.574 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.574 EAL: request: mp_malloc_sync 00:04:04.574 EAL: No shared files mode enabled, IPC is disabled 00:04:04.574 EAL: Heap on socket 0 was shrunk by 2MB 00:04:04.574 EAL: No shared files mode enabled, IPC is disabled 00:04:04.574 EAL: No shared files mode enabled, IPC is disabled 00:04:04.574 EAL: No shared files mode enabled, IPC is disabled 00:04:04.574 00:04:04.574 real 0m1.898s 00:04:04.574 user 0m1.103s 00:04:04.574 sys 0m0.662s 00:04:04.574 13:43:03 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.574 13:43:03 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:04.574 ************************************ 00:04:04.574 END TEST env_vtophys 00:04:04.574 ************************************ 00:04:04.574 13:43:03 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:04.574 13:43:03 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.574 13:43:03 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.574 13:43:03 env -- common/autotest_common.sh@10 -- # set +x 00:04:04.574 ************************************ 00:04:04.574 START TEST env_pci 00:04:04.574 ************************************ 00:04:04.574 13:43:03 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:04.574 00:04:04.574 00:04:04.574 CUnit - A unit testing framework for C - Version 2.1-3 00:04:04.574 http://cunit.sourceforge.net/ 00:04:04.574 00:04:04.574 00:04:04.574 Suite: pci 00:04:04.574 Test: pci_hook ...[2024-12-06 13:43:03.758468] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56561 has claimed it 00:04:04.574 passed 00:04:04.574 00:04:04.574 Run Summary: Type Total Ran Passed Failed Inactive 00:04:04.574 suites 1 1 n/a 0 0 00:04:04.574 tests 1 1 1 0 0 00:04:04.574 asserts 25 25 25 0 n/a 00:04:04.574 00:04:04.574 Elapsed time = 0.002 seconds 00:04:04.574 EAL: Cannot find device (10000:00:01.0) 00:04:04.574 EAL: Failed to attach device on primary process 00:04:04.574 00:04:04.574 real 0m0.023s 00:04:04.574 user 0m0.014s 00:04:04.574 sys 0m0.009s 00:04:04.574 13:43:03 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.574 13:43:03 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:04.574 ************************************ 00:04:04.574 END TEST env_pci 00:04:04.574 ************************************ 00:04:04.574 13:43:03 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:04.574 13:43:03 env -- env/env.sh@15 -- # uname 00:04:04.574 13:43:03 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:04.574 13:43:03 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:04.574 13:43:03 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:04.574 13:43:03 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:04.574 13:43:03 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.574 13:43:03 env -- common/autotest_common.sh@10 -- # set +x 00:04:04.574 ************************************ 00:04:04.574 START TEST env_dpdk_post_init 00:04:04.574 ************************************ 00:04:04.574 13:43:03 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:04.574 EAL: Detected CPU lcores: 10 00:04:04.574 EAL: Detected NUMA nodes: 1 00:04:04.574 EAL: Detected shared linkage of DPDK 00:04:04.574 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:04.574 EAL: Selected IOVA mode 'PA' 00:04:04.574 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:04.834 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:04.834 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:04.834 Starting DPDK initialization... 00:04:04.834 Starting SPDK post initialization... 00:04:04.834 SPDK NVMe probe 00:04:04.834 Attaching to 0000:00:10.0 00:04:04.834 Attaching to 0000:00:11.0 00:04:04.834 Attached to 0000:00:10.0 00:04:04.834 Attached to 0000:00:11.0 00:04:04.834 Cleaning up... 00:04:04.834 00:04:04.834 real 0m0.183s 00:04:04.834 user 0m0.051s 00:04:04.834 sys 0m0.032s 00:04:04.834 13:43:04 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.834 13:43:04 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:04.834 ************************************ 00:04:04.834 END TEST env_dpdk_post_init 00:04:04.834 ************************************ 00:04:04.834 13:43:04 env -- env/env.sh@26 -- # uname 00:04:04.834 13:43:04 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:04.834 13:43:04 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:04.834 13:43:04 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.834 13:43:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.834 13:43:04 env -- common/autotest_common.sh@10 -- # set +x 00:04:04.834 ************************************ 00:04:04.834 START TEST env_mem_callbacks 00:04:04.834 ************************************ 00:04:04.834 13:43:04 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:04.834 EAL: Detected CPU lcores: 10 00:04:04.834 EAL: Detected NUMA nodes: 1 00:04:04.834 EAL: Detected shared linkage of DPDK 00:04:04.834 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:04.834 EAL: Selected IOVA mode 'PA' 00:04:04.834 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:04.834 00:04:04.834 00:04:04.834 CUnit - A unit testing framework for C - Version 2.1-3 00:04:04.834 http://cunit.sourceforge.net/ 00:04:04.834 00:04:04.834 00:04:04.834 Suite: memory 00:04:04.834 Test: test ... 
00:04:04.834 register 0x200000200000 2097152 00:04:04.834 malloc 3145728 00:04:04.834 register 0x200000400000 4194304 00:04:04.834 buf 0x200000500000 len 3145728 PASSED 00:04:04.834 malloc 64 00:04:04.834 buf 0x2000004fff40 len 64 PASSED 00:04:04.834 malloc 4194304 00:04:04.834 register 0x200000800000 6291456 00:04:04.834 buf 0x200000a00000 len 4194304 PASSED 00:04:04.834 free 0x200000500000 3145728 00:04:04.834 free 0x2000004fff40 64 00:04:04.834 unregister 0x200000400000 4194304 PASSED 00:04:04.834 free 0x200000a00000 4194304 00:04:04.834 unregister 0x200000800000 6291456 PASSED 00:04:04.834 malloc 8388608 00:04:04.834 register 0x200000400000 10485760 00:04:04.834 buf 0x200000600000 len 8388608 PASSED 00:04:04.834 free 0x200000600000 8388608 00:04:04.834 unregister 0x200000400000 10485760 PASSED 00:04:04.834 passed 00:04:04.834 00:04:04.834 Run Summary: Type Total Ran Passed Failed Inactive 00:04:04.834 suites 1 1 n/a 0 0 00:04:04.834 tests 1 1 1 0 0 00:04:04.834 asserts 15 15 15 0 n/a 00:04:04.834 00:04:04.834 Elapsed time = 0.007 seconds 00:04:04.834 00:04:04.834 real 0m0.142s 00:04:04.834 user 0m0.017s 00:04:04.834 sys 0m0.024s 00:04:04.834 13:43:04 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.834 13:43:04 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:04.834 ************************************ 00:04:04.834 END TEST env_mem_callbacks 00:04:04.834 ************************************ 00:04:05.093 00:04:05.093 real 0m2.936s 00:04:05.093 user 0m1.595s 00:04:05.093 sys 0m0.995s 00:04:05.093 13:43:04 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.093 ************************************ 00:04:05.093 END TEST env 00:04:05.093 13:43:04 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.093 ************************************ 00:04:05.093 13:43:04 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:05.093 13:43:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.093 13:43:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.093 13:43:04 -- common/autotest_common.sh@10 -- # set +x 00:04:05.093 ************************************ 00:04:05.093 START TEST rpc 00:04:05.093 ************************************ 00:04:05.093 13:43:04 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:05.093 * Looking for test storage... 
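Each of the banners above ("START TEST …", the real/user/sys summary, then "END TEST …") comes from the harness's run_test wrapper. A rough, simplified stand-in for that wrapper is sketched below; it only reproduces the visible behaviour (banner, timed run, banner) and is not the real autotest_common.sh implementation.

#!/usr/bin/env bash
# Minimal stand-in for a run_test-style wrapper: banner, timed command, banner.
run_test_sketch() {
    local name=$1 rc
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
}

run_test_sketch env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut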
00:04:05.093 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:05.093 13:43:04 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:05.093 13:43:04 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:05.093 13:43:04 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:05.093 13:43:04 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:05.093 13:43:04 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:05.093 13:43:04 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:05.093 13:43:04 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:05.093 13:43:04 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:05.093 13:43:04 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:05.093 13:43:04 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:05.093 13:43:04 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:05.093 13:43:04 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:05.093 13:43:04 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:05.093 13:43:04 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:05.093 13:43:04 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:05.094 13:43:04 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:05.094 13:43:04 rpc -- scripts/common.sh@345 -- # : 1 00:04:05.094 13:43:04 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:05.094 13:43:04 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:05.094 13:43:04 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:05.094 13:43:04 rpc -- scripts/common.sh@353 -- # local d=1 00:04:05.094 13:43:04 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:05.094 13:43:04 rpc -- scripts/common.sh@355 -- # echo 1 00:04:05.094 13:43:04 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:05.094 13:43:04 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:05.094 13:43:04 rpc -- scripts/common.sh@353 -- # local d=2 00:04:05.094 13:43:04 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:05.094 13:43:04 rpc -- scripts/common.sh@355 -- # echo 2 00:04:05.094 13:43:04 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:05.094 13:43:04 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:05.094 13:43:04 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:05.094 13:43:04 rpc -- scripts/common.sh@368 -- # return 0 00:04:05.094 13:43:04 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:05.094 13:43:04 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:05.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.094 --rc genhtml_branch_coverage=1 00:04:05.094 --rc genhtml_function_coverage=1 00:04:05.094 --rc genhtml_legend=1 00:04:05.094 --rc geninfo_all_blocks=1 00:04:05.094 --rc geninfo_unexecuted_blocks=1 00:04:05.094 00:04:05.094 ' 00:04:05.094 13:43:04 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:05.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.094 --rc genhtml_branch_coverage=1 00:04:05.094 --rc genhtml_function_coverage=1 00:04:05.094 --rc genhtml_legend=1 00:04:05.094 --rc geninfo_all_blocks=1 00:04:05.094 --rc geninfo_unexecuted_blocks=1 00:04:05.094 00:04:05.094 ' 00:04:05.094 13:43:04 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:05.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.094 --rc genhtml_branch_coverage=1 00:04:05.094 --rc genhtml_function_coverage=1 00:04:05.094 --rc 
genhtml_legend=1 00:04:05.094 --rc geninfo_all_blocks=1 00:04:05.094 --rc geninfo_unexecuted_blocks=1 00:04:05.094 00:04:05.094 ' 00:04:05.094 13:43:04 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:05.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.094 --rc genhtml_branch_coverage=1 00:04:05.094 --rc genhtml_function_coverage=1 00:04:05.094 --rc genhtml_legend=1 00:04:05.094 --rc geninfo_all_blocks=1 00:04:05.094 --rc geninfo_unexecuted_blocks=1 00:04:05.094 00:04:05.094 ' 00:04:05.094 13:43:04 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56684 00:04:05.094 13:43:04 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:05.094 13:43:04 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:05.094 13:43:04 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56684 00:04:05.094 13:43:04 rpc -- common/autotest_common.sh@835 -- # '[' -z 56684 ']' 00:04:05.094 13:43:04 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:05.094 13:43:04 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:05.094 13:43:04 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:05.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:05.094 13:43:04 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:05.094 13:43:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.353 [2024-12-06 13:43:04.555169] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:04:05.353 [2024-12-06 13:43:04.555260] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56684 ] 00:04:05.353 [2024-12-06 13:43:04.708028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.613 [2024-12-06 13:43:04.764607] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:05.613 [2024-12-06 13:43:04.764677] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56684' to capture a snapshot of events at runtime. 00:04:05.613 [2024-12-06 13:43:04.764691] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:05.613 [2024-12-06 13:43:04.764702] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:05.613 [2024-12-06 13:43:04.764710] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56684 for offline analysis/debug. 
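The app_setup_trace notices above already spell out the two ways to collect the bdev tracepoints that '-e bdev' enabled for this target. A minimal sketch of both, assuming the spdk_trace decoder was built alongside the target under build/bin and that pid 56684 is still alive, would be:

  # Option 1: decode the live trace buffer of the running target
  # (-s and -p are taken verbatim from the notice above).
  ./build/bin/spdk_trace -s spdk_tgt -p 56684 > spdk_tgt_trace.txt

  # Option 2: keep the raw shared-memory file for offline decoding
  # once the target has exited.
  cp /dev/shm/spdk_tgt_trace.pid56684 /tmp/

Either way the decoded events are limited to the bdev group, since the trace_get_info output later in this run reports tpoint_group_mask 0x8 with only the bdev mask set to 0xffffffffffffffff.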
00:04:05.613 [2024-12-06 13:43:04.765238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.613 [2024-12-06 13:43:04.863840] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:05.872 13:43:05 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:05.872 13:43:05 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:05.872 13:43:05 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:05.872 13:43:05 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:05.872 13:43:05 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:05.872 13:43:05 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:05.872 13:43:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.872 13:43:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.872 13:43:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.872 ************************************ 00:04:05.872 START TEST rpc_integrity 00:04:05.872 ************************************ 00:04:05.872 13:43:05 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:05.872 13:43:05 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:05.872 13:43:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.872 13:43:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.872 13:43:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.872 13:43:05 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:05.872 13:43:05 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:05.872 13:43:05 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:05.872 13:43:05 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:05.872 13:43:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.872 13:43:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.872 13:43:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.872 13:43:05 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:05.872 13:43:05 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:05.872 13:43:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.872 13:43:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.872 13:43:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.872 13:43:05 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:05.872 { 00:04:05.872 "name": "Malloc0", 00:04:05.872 "aliases": [ 00:04:05.872 "bcb15b4c-3e78-4edb-ae8d-59abfab6f972" 00:04:05.872 ], 00:04:05.872 "product_name": "Malloc disk", 00:04:05.872 "block_size": 512, 00:04:05.872 "num_blocks": 16384, 00:04:05.872 "uuid": "bcb15b4c-3e78-4edb-ae8d-59abfab6f972", 00:04:05.872 "assigned_rate_limits": { 00:04:05.872 "rw_ios_per_sec": 0, 00:04:05.872 "rw_mbytes_per_sec": 0, 00:04:05.872 "r_mbytes_per_sec": 0, 00:04:05.872 "w_mbytes_per_sec": 0 00:04:05.872 }, 00:04:05.872 "claimed": false, 00:04:05.872 "zoned": false, 00:04:05.872 
"supported_io_types": { 00:04:05.872 "read": true, 00:04:05.872 "write": true, 00:04:05.872 "unmap": true, 00:04:05.872 "flush": true, 00:04:05.872 "reset": true, 00:04:05.872 "nvme_admin": false, 00:04:05.872 "nvme_io": false, 00:04:05.872 "nvme_io_md": false, 00:04:05.872 "write_zeroes": true, 00:04:05.872 "zcopy": true, 00:04:05.872 "get_zone_info": false, 00:04:05.872 "zone_management": false, 00:04:05.872 "zone_append": false, 00:04:05.872 "compare": false, 00:04:05.872 "compare_and_write": false, 00:04:05.872 "abort": true, 00:04:05.872 "seek_hole": false, 00:04:05.872 "seek_data": false, 00:04:05.872 "copy": true, 00:04:05.872 "nvme_iov_md": false 00:04:05.872 }, 00:04:05.872 "memory_domains": [ 00:04:05.872 { 00:04:05.872 "dma_device_id": "system", 00:04:05.872 "dma_device_type": 1 00:04:05.872 }, 00:04:05.872 { 00:04:05.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:05.872 "dma_device_type": 2 00:04:05.872 } 00:04:05.872 ], 00:04:05.872 "driver_specific": {} 00:04:05.872 } 00:04:05.873 ]' 00:04:05.873 13:43:05 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:06.133 13:43:05 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:06.133 13:43:05 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:06.133 13:43:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.133 13:43:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.133 [2024-12-06 13:43:05.291640] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:06.133 [2024-12-06 13:43:05.291684] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:06.133 [2024-12-06 13:43:05.291700] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c92b90 00:04:06.133 [2024-12-06 13:43:05.291708] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:06.133 [2024-12-06 13:43:05.293091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:06.133 [2024-12-06 13:43:05.293129] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:06.133 Passthru0 00:04:06.133 13:43:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.133 13:43:05 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:06.133 13:43:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.133 13:43:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.133 13:43:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.133 13:43:05 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:06.133 { 00:04:06.133 "name": "Malloc0", 00:04:06.133 "aliases": [ 00:04:06.133 "bcb15b4c-3e78-4edb-ae8d-59abfab6f972" 00:04:06.133 ], 00:04:06.133 "product_name": "Malloc disk", 00:04:06.133 "block_size": 512, 00:04:06.133 "num_blocks": 16384, 00:04:06.133 "uuid": "bcb15b4c-3e78-4edb-ae8d-59abfab6f972", 00:04:06.133 "assigned_rate_limits": { 00:04:06.133 "rw_ios_per_sec": 0, 00:04:06.133 "rw_mbytes_per_sec": 0, 00:04:06.133 "r_mbytes_per_sec": 0, 00:04:06.133 "w_mbytes_per_sec": 0 00:04:06.133 }, 00:04:06.133 "claimed": true, 00:04:06.133 "claim_type": "exclusive_write", 00:04:06.133 "zoned": false, 00:04:06.133 "supported_io_types": { 00:04:06.133 "read": true, 00:04:06.133 "write": true, 00:04:06.133 "unmap": true, 00:04:06.133 "flush": true, 00:04:06.133 "reset": true, 00:04:06.133 "nvme_admin": false, 
00:04:06.133 "nvme_io": false, 00:04:06.133 "nvme_io_md": false, 00:04:06.133 "write_zeroes": true, 00:04:06.133 "zcopy": true, 00:04:06.133 "get_zone_info": false, 00:04:06.133 "zone_management": false, 00:04:06.133 "zone_append": false, 00:04:06.133 "compare": false, 00:04:06.133 "compare_and_write": false, 00:04:06.133 "abort": true, 00:04:06.133 "seek_hole": false, 00:04:06.133 "seek_data": false, 00:04:06.133 "copy": true, 00:04:06.133 "nvme_iov_md": false 00:04:06.133 }, 00:04:06.133 "memory_domains": [ 00:04:06.133 { 00:04:06.133 "dma_device_id": "system", 00:04:06.133 "dma_device_type": 1 00:04:06.133 }, 00:04:06.133 { 00:04:06.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.133 "dma_device_type": 2 00:04:06.133 } 00:04:06.133 ], 00:04:06.133 "driver_specific": {} 00:04:06.133 }, 00:04:06.133 { 00:04:06.133 "name": "Passthru0", 00:04:06.133 "aliases": [ 00:04:06.133 "b2ebe513-886b-5116-81a4-569c00ef4efd" 00:04:06.133 ], 00:04:06.133 "product_name": "passthru", 00:04:06.133 "block_size": 512, 00:04:06.133 "num_blocks": 16384, 00:04:06.133 "uuid": "b2ebe513-886b-5116-81a4-569c00ef4efd", 00:04:06.133 "assigned_rate_limits": { 00:04:06.133 "rw_ios_per_sec": 0, 00:04:06.133 "rw_mbytes_per_sec": 0, 00:04:06.133 "r_mbytes_per_sec": 0, 00:04:06.133 "w_mbytes_per_sec": 0 00:04:06.133 }, 00:04:06.133 "claimed": false, 00:04:06.133 "zoned": false, 00:04:06.133 "supported_io_types": { 00:04:06.133 "read": true, 00:04:06.133 "write": true, 00:04:06.133 "unmap": true, 00:04:06.133 "flush": true, 00:04:06.133 "reset": true, 00:04:06.133 "nvme_admin": false, 00:04:06.133 "nvme_io": false, 00:04:06.133 "nvme_io_md": false, 00:04:06.133 "write_zeroes": true, 00:04:06.133 "zcopy": true, 00:04:06.133 "get_zone_info": false, 00:04:06.133 "zone_management": false, 00:04:06.133 "zone_append": false, 00:04:06.133 "compare": false, 00:04:06.134 "compare_and_write": false, 00:04:06.134 "abort": true, 00:04:06.134 "seek_hole": false, 00:04:06.134 "seek_data": false, 00:04:06.134 "copy": true, 00:04:06.134 "nvme_iov_md": false 00:04:06.134 }, 00:04:06.134 "memory_domains": [ 00:04:06.134 { 00:04:06.134 "dma_device_id": "system", 00:04:06.134 "dma_device_type": 1 00:04:06.134 }, 00:04:06.134 { 00:04:06.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.134 "dma_device_type": 2 00:04:06.134 } 00:04:06.134 ], 00:04:06.134 "driver_specific": { 00:04:06.134 "passthru": { 00:04:06.134 "name": "Passthru0", 00:04:06.134 "base_bdev_name": "Malloc0" 00:04:06.134 } 00:04:06.134 } 00:04:06.134 } 00:04:06.134 ]' 00:04:06.134 13:43:05 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:06.134 13:43:05 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:06.134 13:43:05 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:06.134 13:43:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.134 13:43:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.134 13:43:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.134 13:43:05 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:06.134 13:43:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.134 13:43:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.134 13:43:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.134 13:43:05 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:06.134 13:43:05 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.134 13:43:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.134 13:43:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.134 13:43:05 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:06.134 13:43:05 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:06.134 13:43:05 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:06.134 00:04:06.134 real 0m0.312s 00:04:06.134 user 0m0.224s 00:04:06.134 sys 0m0.027s 00:04:06.134 13:43:05 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.134 ************************************ 00:04:06.134 13:43:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.134 END TEST rpc_integrity 00:04:06.134 ************************************ 00:04:06.134 13:43:05 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:06.134 13:43:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.134 13:43:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.134 13:43:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.134 ************************************ 00:04:06.134 START TEST rpc_plugins 00:04:06.134 ************************************ 00:04:06.134 13:43:05 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:06.134 13:43:05 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:06.134 13:43:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.134 13:43:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:06.134 13:43:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.134 13:43:05 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:06.134 13:43:05 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:06.134 13:43:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.134 13:43:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:06.392 13:43:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.392 13:43:05 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:06.392 { 00:04:06.392 "name": "Malloc1", 00:04:06.392 "aliases": [ 00:04:06.392 "c0746a6a-2da8-4dea-a2e2-7adc33c2d3ce" 00:04:06.392 ], 00:04:06.392 "product_name": "Malloc disk", 00:04:06.392 "block_size": 4096, 00:04:06.392 "num_blocks": 256, 00:04:06.392 "uuid": "c0746a6a-2da8-4dea-a2e2-7adc33c2d3ce", 00:04:06.392 "assigned_rate_limits": { 00:04:06.392 "rw_ios_per_sec": 0, 00:04:06.392 "rw_mbytes_per_sec": 0, 00:04:06.392 "r_mbytes_per_sec": 0, 00:04:06.392 "w_mbytes_per_sec": 0 00:04:06.392 }, 00:04:06.392 "claimed": false, 00:04:06.392 "zoned": false, 00:04:06.392 "supported_io_types": { 00:04:06.392 "read": true, 00:04:06.392 "write": true, 00:04:06.392 "unmap": true, 00:04:06.392 "flush": true, 00:04:06.392 "reset": true, 00:04:06.392 "nvme_admin": false, 00:04:06.392 "nvme_io": false, 00:04:06.392 "nvme_io_md": false, 00:04:06.392 "write_zeroes": true, 00:04:06.392 "zcopy": true, 00:04:06.392 "get_zone_info": false, 00:04:06.392 "zone_management": false, 00:04:06.392 "zone_append": false, 00:04:06.392 "compare": false, 00:04:06.392 "compare_and_write": false, 00:04:06.392 "abort": true, 00:04:06.392 "seek_hole": false, 00:04:06.392 "seek_data": false, 00:04:06.392 "copy": true, 00:04:06.392 "nvme_iov_md": false 00:04:06.392 }, 00:04:06.392 "memory_domains": [ 00:04:06.392 { 
00:04:06.392 "dma_device_id": "system", 00:04:06.392 "dma_device_type": 1 00:04:06.392 }, 00:04:06.392 { 00:04:06.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.392 "dma_device_type": 2 00:04:06.392 } 00:04:06.392 ], 00:04:06.392 "driver_specific": {} 00:04:06.392 } 00:04:06.392 ]' 00:04:06.392 13:43:05 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:06.392 13:43:05 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:06.392 13:43:05 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:06.392 13:43:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.392 13:43:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:06.392 13:43:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.392 13:43:05 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:06.392 13:43:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.392 13:43:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:06.392 13:43:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.392 13:43:05 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:06.393 13:43:05 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:06.393 13:43:05 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:06.393 00:04:06.393 real 0m0.157s 00:04:06.393 user 0m0.108s 00:04:06.393 sys 0m0.018s 00:04:06.393 13:43:05 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.393 ************************************ 00:04:06.393 13:43:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:06.393 END TEST rpc_plugins 00:04:06.393 ************************************ 00:04:06.393 13:43:05 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:06.393 13:43:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.393 13:43:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.393 13:43:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.393 ************************************ 00:04:06.393 START TEST rpc_trace_cmd_test 00:04:06.393 ************************************ 00:04:06.393 13:43:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:06.393 13:43:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:06.393 13:43:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:06.393 13:43:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.393 13:43:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:06.393 13:43:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.393 13:43:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:06.393 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56684", 00:04:06.393 "tpoint_group_mask": "0x8", 00:04:06.393 "iscsi_conn": { 00:04:06.393 "mask": "0x2", 00:04:06.393 "tpoint_mask": "0x0" 00:04:06.393 }, 00:04:06.393 "scsi": { 00:04:06.393 "mask": "0x4", 00:04:06.393 "tpoint_mask": "0x0" 00:04:06.393 }, 00:04:06.393 "bdev": { 00:04:06.393 "mask": "0x8", 00:04:06.393 "tpoint_mask": "0xffffffffffffffff" 00:04:06.393 }, 00:04:06.393 "nvmf_rdma": { 00:04:06.393 "mask": "0x10", 00:04:06.393 "tpoint_mask": "0x0" 00:04:06.393 }, 00:04:06.393 "nvmf_tcp": { 00:04:06.393 "mask": "0x20", 00:04:06.393 "tpoint_mask": "0x0" 00:04:06.393 }, 00:04:06.393 "ftl": { 00:04:06.393 
"mask": "0x40", 00:04:06.393 "tpoint_mask": "0x0" 00:04:06.393 }, 00:04:06.393 "blobfs": { 00:04:06.393 "mask": "0x80", 00:04:06.393 "tpoint_mask": "0x0" 00:04:06.393 }, 00:04:06.393 "dsa": { 00:04:06.393 "mask": "0x200", 00:04:06.393 "tpoint_mask": "0x0" 00:04:06.393 }, 00:04:06.393 "thread": { 00:04:06.393 "mask": "0x400", 00:04:06.393 "tpoint_mask": "0x0" 00:04:06.393 }, 00:04:06.393 "nvme_pcie": { 00:04:06.393 "mask": "0x800", 00:04:06.393 "tpoint_mask": "0x0" 00:04:06.393 }, 00:04:06.393 "iaa": { 00:04:06.393 "mask": "0x1000", 00:04:06.393 "tpoint_mask": "0x0" 00:04:06.393 }, 00:04:06.393 "nvme_tcp": { 00:04:06.393 "mask": "0x2000", 00:04:06.393 "tpoint_mask": "0x0" 00:04:06.393 }, 00:04:06.393 "bdev_nvme": { 00:04:06.393 "mask": "0x4000", 00:04:06.393 "tpoint_mask": "0x0" 00:04:06.393 }, 00:04:06.393 "sock": { 00:04:06.393 "mask": "0x8000", 00:04:06.393 "tpoint_mask": "0x0" 00:04:06.393 }, 00:04:06.393 "blob": { 00:04:06.393 "mask": "0x10000", 00:04:06.393 "tpoint_mask": "0x0" 00:04:06.393 }, 00:04:06.393 "bdev_raid": { 00:04:06.393 "mask": "0x20000", 00:04:06.393 "tpoint_mask": "0x0" 00:04:06.393 }, 00:04:06.393 "scheduler": { 00:04:06.393 "mask": "0x40000", 00:04:06.393 "tpoint_mask": "0x0" 00:04:06.393 } 00:04:06.393 }' 00:04:06.393 13:43:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:06.393 13:43:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:06.393 13:43:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:06.652 13:43:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:06.652 13:43:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:06.652 13:43:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:06.652 13:43:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:06.652 13:43:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:06.652 13:43:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:06.652 13:43:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:06.652 00:04:06.652 real 0m0.278s 00:04:06.652 user 0m0.239s 00:04:06.652 sys 0m0.032s 00:04:06.652 13:43:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.652 ************************************ 00:04:06.652 END TEST rpc_trace_cmd_test 00:04:06.652 ************************************ 00:04:06.652 13:43:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:06.652 13:43:06 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:06.652 13:43:06 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:06.652 13:43:06 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:06.652 13:43:06 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.652 13:43:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.652 13:43:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.652 ************************************ 00:04:06.652 START TEST rpc_daemon_integrity 00:04:06.652 ************************************ 00:04:06.652 13:43:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:06.652 13:43:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:06.652 13:43:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.652 13:43:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.652 
13:43:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.652 13:43:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:06.911 13:43:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:06.911 13:43:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:06.911 13:43:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:06.911 13:43:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.911 13:43:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.911 13:43:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.911 13:43:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:06.911 13:43:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:06.911 13:43:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.911 13:43:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.911 13:43:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.911 13:43:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:06.911 { 00:04:06.911 "name": "Malloc2", 00:04:06.911 "aliases": [ 00:04:06.911 "0b611527-374f-472e-884e-2c9fbeb5ab5a" 00:04:06.911 ], 00:04:06.911 "product_name": "Malloc disk", 00:04:06.911 "block_size": 512, 00:04:06.911 "num_blocks": 16384, 00:04:06.911 "uuid": "0b611527-374f-472e-884e-2c9fbeb5ab5a", 00:04:06.911 "assigned_rate_limits": { 00:04:06.911 "rw_ios_per_sec": 0, 00:04:06.911 "rw_mbytes_per_sec": 0, 00:04:06.911 "r_mbytes_per_sec": 0, 00:04:06.911 "w_mbytes_per_sec": 0 00:04:06.911 }, 00:04:06.911 "claimed": false, 00:04:06.911 "zoned": false, 00:04:06.911 "supported_io_types": { 00:04:06.911 "read": true, 00:04:06.911 "write": true, 00:04:06.911 "unmap": true, 00:04:06.911 "flush": true, 00:04:06.911 "reset": true, 00:04:06.911 "nvme_admin": false, 00:04:06.911 "nvme_io": false, 00:04:06.911 "nvme_io_md": false, 00:04:06.911 "write_zeroes": true, 00:04:06.911 "zcopy": true, 00:04:06.911 "get_zone_info": false, 00:04:06.911 "zone_management": false, 00:04:06.912 "zone_append": false, 00:04:06.912 "compare": false, 00:04:06.912 "compare_and_write": false, 00:04:06.912 "abort": true, 00:04:06.912 "seek_hole": false, 00:04:06.912 "seek_data": false, 00:04:06.912 "copy": true, 00:04:06.912 "nvme_iov_md": false 00:04:06.912 }, 00:04:06.912 "memory_domains": [ 00:04:06.912 { 00:04:06.912 "dma_device_id": "system", 00:04:06.912 "dma_device_type": 1 00:04:06.912 }, 00:04:06.912 { 00:04:06.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.912 "dma_device_type": 2 00:04:06.912 } 00:04:06.912 ], 00:04:06.912 "driver_specific": {} 00:04:06.912 } 00:04:06.912 ]' 00:04:06.912 13:43:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:06.912 13:43:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:06.912 13:43:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:06.912 13:43:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.912 13:43:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.912 [2024-12-06 13:43:06.195863] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:06.912 [2024-12-06 13:43:06.196105] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:04:06.912 [2024-12-06 13:43:06.196141] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1cf8440 00:04:06.912 [2024-12-06 13:43:06.196151] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:06.912 [2024-12-06 13:43:06.197900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:06.912 [2024-12-06 13:43:06.197932] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:06.912 Passthru0 00:04:06.912 13:43:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.912 13:43:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:06.912 13:43:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.912 13:43:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.912 13:43:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.912 13:43:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:06.912 { 00:04:06.912 "name": "Malloc2", 00:04:06.912 "aliases": [ 00:04:06.912 "0b611527-374f-472e-884e-2c9fbeb5ab5a" 00:04:06.912 ], 00:04:06.912 "product_name": "Malloc disk", 00:04:06.912 "block_size": 512, 00:04:06.912 "num_blocks": 16384, 00:04:06.912 "uuid": "0b611527-374f-472e-884e-2c9fbeb5ab5a", 00:04:06.912 "assigned_rate_limits": { 00:04:06.912 "rw_ios_per_sec": 0, 00:04:06.912 "rw_mbytes_per_sec": 0, 00:04:06.912 "r_mbytes_per_sec": 0, 00:04:06.912 "w_mbytes_per_sec": 0 00:04:06.912 }, 00:04:06.912 "claimed": true, 00:04:06.912 "claim_type": "exclusive_write", 00:04:06.912 "zoned": false, 00:04:06.912 "supported_io_types": { 00:04:06.912 "read": true, 00:04:06.912 "write": true, 00:04:06.912 "unmap": true, 00:04:06.912 "flush": true, 00:04:06.912 "reset": true, 00:04:06.912 "nvme_admin": false, 00:04:06.912 "nvme_io": false, 00:04:06.912 "nvme_io_md": false, 00:04:06.912 "write_zeroes": true, 00:04:06.912 "zcopy": true, 00:04:06.912 "get_zone_info": false, 00:04:06.912 "zone_management": false, 00:04:06.912 "zone_append": false, 00:04:06.912 "compare": false, 00:04:06.912 "compare_and_write": false, 00:04:06.912 "abort": true, 00:04:06.912 "seek_hole": false, 00:04:06.912 "seek_data": false, 00:04:06.912 "copy": true, 00:04:06.912 "nvme_iov_md": false 00:04:06.912 }, 00:04:06.912 "memory_domains": [ 00:04:06.912 { 00:04:06.912 "dma_device_id": "system", 00:04:06.912 "dma_device_type": 1 00:04:06.912 }, 00:04:06.912 { 00:04:06.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.912 "dma_device_type": 2 00:04:06.912 } 00:04:06.912 ], 00:04:06.912 "driver_specific": {} 00:04:06.912 }, 00:04:06.912 { 00:04:06.912 "name": "Passthru0", 00:04:06.912 "aliases": [ 00:04:06.912 "82cda978-26ae-5779-a544-0fef8f33f2b9" 00:04:06.912 ], 00:04:06.912 "product_name": "passthru", 00:04:06.912 "block_size": 512, 00:04:06.912 "num_blocks": 16384, 00:04:06.912 "uuid": "82cda978-26ae-5779-a544-0fef8f33f2b9", 00:04:06.912 "assigned_rate_limits": { 00:04:06.912 "rw_ios_per_sec": 0, 00:04:06.912 "rw_mbytes_per_sec": 0, 00:04:06.912 "r_mbytes_per_sec": 0, 00:04:06.912 "w_mbytes_per_sec": 0 00:04:06.912 }, 00:04:06.912 "claimed": false, 00:04:06.912 "zoned": false, 00:04:06.912 "supported_io_types": { 00:04:06.912 "read": true, 00:04:06.912 "write": true, 00:04:06.912 "unmap": true, 00:04:06.912 "flush": true, 00:04:06.912 "reset": true, 00:04:06.912 "nvme_admin": false, 00:04:06.912 "nvme_io": false, 00:04:06.912 
"nvme_io_md": false, 00:04:06.912 "write_zeroes": true, 00:04:06.912 "zcopy": true, 00:04:06.912 "get_zone_info": false, 00:04:06.912 "zone_management": false, 00:04:06.912 "zone_append": false, 00:04:06.912 "compare": false, 00:04:06.912 "compare_and_write": false, 00:04:06.912 "abort": true, 00:04:06.912 "seek_hole": false, 00:04:06.912 "seek_data": false, 00:04:06.912 "copy": true, 00:04:06.912 "nvme_iov_md": false 00:04:06.912 }, 00:04:06.912 "memory_domains": [ 00:04:06.912 { 00:04:06.912 "dma_device_id": "system", 00:04:06.912 "dma_device_type": 1 00:04:06.912 }, 00:04:06.912 { 00:04:06.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.912 "dma_device_type": 2 00:04:06.912 } 00:04:06.912 ], 00:04:06.912 "driver_specific": { 00:04:06.912 "passthru": { 00:04:06.912 "name": "Passthru0", 00:04:06.912 "base_bdev_name": "Malloc2" 00:04:06.912 } 00:04:06.912 } 00:04:06.912 } 00:04:06.912 ]' 00:04:06.912 13:43:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:06.912 13:43:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:06.912 13:43:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:06.912 13:43:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.912 13:43:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.912 13:43:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.912 13:43:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:06.912 13:43:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.912 13:43:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.912 13:43:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.912 13:43:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:06.912 13:43:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.912 13:43:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.912 13:43:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.912 13:43:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:06.912 13:43:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:07.172 ************************************ 00:04:07.172 END TEST rpc_daemon_integrity 00:04:07.172 ************************************ 00:04:07.172 13:43:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:07.172 00:04:07.172 real 0m0.320s 00:04:07.172 user 0m0.210s 00:04:07.172 sys 0m0.046s 00:04:07.172 13:43:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:07.172 13:43:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.172 13:43:06 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:07.172 13:43:06 rpc -- rpc/rpc.sh@84 -- # killprocess 56684 00:04:07.172 13:43:06 rpc -- common/autotest_common.sh@954 -- # '[' -z 56684 ']' 00:04:07.172 13:43:06 rpc -- common/autotest_common.sh@958 -- # kill -0 56684 00:04:07.172 13:43:06 rpc -- common/autotest_common.sh@959 -- # uname 00:04:07.172 13:43:06 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:07.172 13:43:06 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56684 00:04:07.172 13:43:06 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:04:07.172 killing process with pid 56684 00:04:07.172 13:43:06 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:07.173 13:43:06 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56684' 00:04:07.173 13:43:06 rpc -- common/autotest_common.sh@973 -- # kill 56684 00:04:07.173 13:43:06 rpc -- common/autotest_common.sh@978 -- # wait 56684 00:04:07.741 ************************************ 00:04:07.741 END TEST rpc 00:04:07.741 ************************************ 00:04:07.741 00:04:07.741 real 0m2.636s 00:04:07.741 user 0m3.218s 00:04:07.741 sys 0m0.732s 00:04:07.741 13:43:06 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:07.741 13:43:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.741 13:43:06 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:07.741 13:43:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.741 13:43:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.741 13:43:06 -- common/autotest_common.sh@10 -- # set +x 00:04:07.741 ************************************ 00:04:07.741 START TEST skip_rpc 00:04:07.741 ************************************ 00:04:07.741 13:43:06 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:07.741 * Looking for test storage... 00:04:07.741 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:07.741 13:43:07 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:07.741 13:43:07 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:07.741 13:43:07 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:08.001 13:43:07 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:08.001 13:43:07 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:08.001 13:43:07 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:08.001 13:43:07 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:08.001 13:43:07 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:08.001 13:43:07 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:08.001 13:43:07 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:08.001 13:43:07 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:08.001 13:43:07 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:08.001 13:43:07 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:08.001 13:43:07 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:08.001 13:43:07 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:08.001 13:43:07 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:08.001 13:43:07 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:08.001 13:43:07 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:08.001 13:43:07 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:08.001 13:43:07 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:08.001 13:43:07 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:08.001 13:43:07 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:08.001 13:43:07 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:08.001 13:43:07 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:08.001 13:43:07 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:08.001 13:43:07 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:08.001 13:43:07 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:08.001 13:43:07 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:08.001 13:43:07 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:08.001 13:43:07 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:08.001 13:43:07 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:08.001 13:43:07 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:08.001 13:43:07 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:08.001 13:43:07 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:08.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.001 --rc genhtml_branch_coverage=1 00:04:08.001 --rc genhtml_function_coverage=1 00:04:08.001 --rc genhtml_legend=1 00:04:08.001 --rc geninfo_all_blocks=1 00:04:08.001 --rc geninfo_unexecuted_blocks=1 00:04:08.001 00:04:08.001 ' 00:04:08.001 13:43:07 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:08.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.001 --rc genhtml_branch_coverage=1 00:04:08.001 --rc genhtml_function_coverage=1 00:04:08.001 --rc genhtml_legend=1 00:04:08.001 --rc geninfo_all_blocks=1 00:04:08.001 --rc geninfo_unexecuted_blocks=1 00:04:08.001 00:04:08.001 ' 00:04:08.001 13:43:07 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:08.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.001 --rc genhtml_branch_coverage=1 00:04:08.001 --rc genhtml_function_coverage=1 00:04:08.001 --rc genhtml_legend=1 00:04:08.001 --rc geninfo_all_blocks=1 00:04:08.001 --rc geninfo_unexecuted_blocks=1 00:04:08.001 00:04:08.001 ' 00:04:08.001 13:43:07 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:08.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.001 --rc genhtml_branch_coverage=1 00:04:08.001 --rc genhtml_function_coverage=1 00:04:08.001 --rc genhtml_legend=1 00:04:08.001 --rc geninfo_all_blocks=1 00:04:08.001 --rc geninfo_unexecuted_blocks=1 00:04:08.001 00:04:08.001 ' 00:04:08.001 13:43:07 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:08.001 13:43:07 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:08.001 13:43:07 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:08.001 13:43:07 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.001 13:43:07 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.001 13:43:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.001 ************************************ 00:04:08.001 START TEST skip_rpc 00:04:08.001 ************************************ 00:04:08.001 13:43:07 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:08.001 13:43:07 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=56883 00:04:08.001 13:43:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:08.001 13:43:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:08.001 13:43:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:08.001 [2024-12-06 13:43:07.262218] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:04:08.001 [2024-12-06 13:43:07.262510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56883 ] 00:04:08.261 [2024-12-06 13:43:07.409415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.261 [2024-12-06 13:43:07.454054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.261 [2024-12-06 13:43:07.541531] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:13.534 13:43:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:13.534 13:43:12 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:13.534 13:43:12 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:13.534 13:43:12 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:13.534 13:43:12 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:13.534 13:43:12 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:13.534 13:43:12 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:13.534 13:43:12 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:13.534 13:43:12 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.534 13:43:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.534 13:43:12 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:13.534 13:43:12 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:13.534 13:43:12 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:13.534 13:43:12 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:13.534 13:43:12 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:13.534 13:43:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:13.534 13:43:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56883 00:04:13.534 13:43:12 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 56883 ']' 00:04:13.534 13:43:12 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 56883 00:04:13.534 13:43:12 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:13.534 13:43:12 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:13.534 13:43:12 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56883 00:04:13.534 13:43:12 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:13.534 13:43:12 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:13.534 13:43:12 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process 
with pid 56883' 00:04:13.534 killing process with pid 56883 00:04:13.534 13:43:12 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 56883 00:04:13.534 13:43:12 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 56883 00:04:13.534 00:04:13.534 real 0m5.540s 00:04:13.534 user 0m5.110s 00:04:13.534 sys 0m0.350s 00:04:13.534 13:43:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:13.534 13:43:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.534 ************************************ 00:04:13.534 END TEST skip_rpc 00:04:13.534 ************************************ 00:04:13.534 13:43:12 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:13.534 13:43:12 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:13.534 13:43:12 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.534 13:43:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.534 ************************************ 00:04:13.534 START TEST skip_rpc_with_json 00:04:13.534 ************************************ 00:04:13.534 13:43:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:13.534 13:43:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:13.534 13:43:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=56969 00:04:13.534 13:43:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:13.534 13:43:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:13.534 13:43:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 56969 00:04:13.534 13:43:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 56969 ']' 00:04:13.534 13:43:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:13.534 13:43:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:13.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:13.534 13:43:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:13.534 13:43:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:13.534 13:43:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:13.534 [2024-12-06 13:43:12.857768] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:04:13.534 [2024-12-06 13:43:12.858321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56969 ] 00:04:13.794 [2024-12-06 13:43:12.998705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.794 [2024-12-06 13:43:13.047010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.794 [2024-12-06 13:43:13.135043] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:14.731 13:43:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:14.731 13:43:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:14.731 13:43:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:14.731 13:43:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.731 13:43:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:14.731 [2024-12-06 13:43:13.786909] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:14.731 request: 00:04:14.731 { 00:04:14.731 "trtype": "tcp", 00:04:14.731 "method": "nvmf_get_transports", 00:04:14.731 "req_id": 1 00:04:14.731 } 00:04:14.731 Got JSON-RPC error response 00:04:14.731 response: 00:04:14.731 { 00:04:14.731 "code": -19, 00:04:14.731 "message": "No such device" 00:04:14.731 } 00:04:14.731 13:43:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:14.731 13:43:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:14.731 13:43:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.731 13:43:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:14.731 [2024-12-06 13:43:13.798999] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:14.731 13:43:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.731 13:43:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:14.731 13:43:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.731 13:43:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:14.731 13:43:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.731 13:43:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:14.731 { 00:04:14.731 "subsystems": [ 00:04:14.731 { 00:04:14.731 "subsystem": "fsdev", 00:04:14.731 "config": [ 00:04:14.731 { 00:04:14.731 "method": "fsdev_set_opts", 00:04:14.731 "params": { 00:04:14.731 "fsdev_io_pool_size": 65535, 00:04:14.731 "fsdev_io_cache_size": 256 00:04:14.731 } 00:04:14.731 } 00:04:14.731 ] 00:04:14.731 }, 00:04:14.731 { 00:04:14.731 "subsystem": "keyring", 00:04:14.731 "config": [] 00:04:14.731 }, 00:04:14.731 { 00:04:14.731 "subsystem": "iobuf", 00:04:14.731 "config": [ 00:04:14.731 { 00:04:14.731 "method": "iobuf_set_options", 00:04:14.731 "params": { 00:04:14.731 "small_pool_count": 8192, 00:04:14.731 "large_pool_count": 1024, 00:04:14.731 "small_bufsize": 8192, 00:04:14.731 "large_bufsize": 135168, 00:04:14.731 "enable_numa": false 00:04:14.731 } 
00:04:14.731 } 00:04:14.731 ] 00:04:14.731 }, 00:04:14.731 { 00:04:14.731 "subsystem": "sock", 00:04:14.731 "config": [ 00:04:14.731 { 00:04:14.731 "method": "sock_set_default_impl", 00:04:14.731 "params": { 00:04:14.731 "impl_name": "uring" 00:04:14.731 } 00:04:14.731 }, 00:04:14.731 { 00:04:14.731 "method": "sock_impl_set_options", 00:04:14.731 "params": { 00:04:14.731 "impl_name": "ssl", 00:04:14.731 "recv_buf_size": 4096, 00:04:14.731 "send_buf_size": 4096, 00:04:14.731 "enable_recv_pipe": true, 00:04:14.731 "enable_quickack": false, 00:04:14.731 "enable_placement_id": 0, 00:04:14.731 "enable_zerocopy_send_server": true, 00:04:14.731 "enable_zerocopy_send_client": false, 00:04:14.731 "zerocopy_threshold": 0, 00:04:14.731 "tls_version": 0, 00:04:14.731 "enable_ktls": false 00:04:14.731 } 00:04:14.731 }, 00:04:14.731 { 00:04:14.731 "method": "sock_impl_set_options", 00:04:14.731 "params": { 00:04:14.731 "impl_name": "posix", 00:04:14.731 "recv_buf_size": 2097152, 00:04:14.731 "send_buf_size": 2097152, 00:04:14.731 "enable_recv_pipe": true, 00:04:14.731 "enable_quickack": false, 00:04:14.731 "enable_placement_id": 0, 00:04:14.731 "enable_zerocopy_send_server": true, 00:04:14.731 "enable_zerocopy_send_client": false, 00:04:14.732 "zerocopy_threshold": 0, 00:04:14.732 "tls_version": 0, 00:04:14.732 "enable_ktls": false 00:04:14.732 } 00:04:14.732 }, 00:04:14.732 { 00:04:14.732 "method": "sock_impl_set_options", 00:04:14.732 "params": { 00:04:14.732 "impl_name": "uring", 00:04:14.732 "recv_buf_size": 2097152, 00:04:14.732 "send_buf_size": 2097152, 00:04:14.732 "enable_recv_pipe": true, 00:04:14.732 "enable_quickack": false, 00:04:14.732 "enable_placement_id": 0, 00:04:14.732 "enable_zerocopy_send_server": false, 00:04:14.732 "enable_zerocopy_send_client": false, 00:04:14.732 "zerocopy_threshold": 0, 00:04:14.732 "tls_version": 0, 00:04:14.732 "enable_ktls": false 00:04:14.732 } 00:04:14.732 } 00:04:14.732 ] 00:04:14.732 }, 00:04:14.732 { 00:04:14.732 "subsystem": "vmd", 00:04:14.732 "config": [] 00:04:14.732 }, 00:04:14.732 { 00:04:14.732 "subsystem": "accel", 00:04:14.732 "config": [ 00:04:14.732 { 00:04:14.732 "method": "accel_set_options", 00:04:14.732 "params": { 00:04:14.732 "small_cache_size": 128, 00:04:14.732 "large_cache_size": 16, 00:04:14.732 "task_count": 2048, 00:04:14.732 "sequence_count": 2048, 00:04:14.732 "buf_count": 2048 00:04:14.732 } 00:04:14.732 } 00:04:14.732 ] 00:04:14.732 }, 00:04:14.732 { 00:04:14.732 "subsystem": "bdev", 00:04:14.732 "config": [ 00:04:14.732 { 00:04:14.732 "method": "bdev_set_options", 00:04:14.732 "params": { 00:04:14.732 "bdev_io_pool_size": 65535, 00:04:14.732 "bdev_io_cache_size": 256, 00:04:14.732 "bdev_auto_examine": true, 00:04:14.732 "iobuf_small_cache_size": 128, 00:04:14.732 "iobuf_large_cache_size": 16 00:04:14.732 } 00:04:14.732 }, 00:04:14.732 { 00:04:14.732 "method": "bdev_raid_set_options", 00:04:14.732 "params": { 00:04:14.732 "process_window_size_kb": 1024, 00:04:14.732 "process_max_bandwidth_mb_sec": 0 00:04:14.732 } 00:04:14.732 }, 00:04:14.732 { 00:04:14.732 "method": "bdev_iscsi_set_options", 00:04:14.732 "params": { 00:04:14.732 "timeout_sec": 30 00:04:14.732 } 00:04:14.732 }, 00:04:14.732 { 00:04:14.732 "method": "bdev_nvme_set_options", 00:04:14.732 "params": { 00:04:14.732 "action_on_timeout": "none", 00:04:14.732 "timeout_us": 0, 00:04:14.732 "timeout_admin_us": 0, 00:04:14.732 "keep_alive_timeout_ms": 10000, 00:04:14.732 "arbitration_burst": 0, 00:04:14.732 "low_priority_weight": 0, 00:04:14.732 "medium_priority_weight": 
0, 00:04:14.732 "high_priority_weight": 0, 00:04:14.732 "nvme_adminq_poll_period_us": 10000, 00:04:14.732 "nvme_ioq_poll_period_us": 0, 00:04:14.732 "io_queue_requests": 0, 00:04:14.732 "delay_cmd_submit": true, 00:04:14.732 "transport_retry_count": 4, 00:04:14.732 "bdev_retry_count": 3, 00:04:14.732 "transport_ack_timeout": 0, 00:04:14.732 "ctrlr_loss_timeout_sec": 0, 00:04:14.732 "reconnect_delay_sec": 0, 00:04:14.732 "fast_io_fail_timeout_sec": 0, 00:04:14.732 "disable_auto_failback": false, 00:04:14.732 "generate_uuids": false, 00:04:14.732 "transport_tos": 0, 00:04:14.732 "nvme_error_stat": false, 00:04:14.732 "rdma_srq_size": 0, 00:04:14.732 "io_path_stat": false, 00:04:14.732 "allow_accel_sequence": false, 00:04:14.732 "rdma_max_cq_size": 0, 00:04:14.732 "rdma_cm_event_timeout_ms": 0, 00:04:14.732 "dhchap_digests": [ 00:04:14.732 "sha256", 00:04:14.732 "sha384", 00:04:14.732 "sha512" 00:04:14.732 ], 00:04:14.732 "dhchap_dhgroups": [ 00:04:14.732 "null", 00:04:14.732 "ffdhe2048", 00:04:14.732 "ffdhe3072", 00:04:14.732 "ffdhe4096", 00:04:14.732 "ffdhe6144", 00:04:14.732 "ffdhe8192" 00:04:14.732 ] 00:04:14.732 } 00:04:14.732 }, 00:04:14.732 { 00:04:14.732 "method": "bdev_nvme_set_hotplug", 00:04:14.732 "params": { 00:04:14.732 "period_us": 100000, 00:04:14.732 "enable": false 00:04:14.732 } 00:04:14.732 }, 00:04:14.732 { 00:04:14.732 "method": "bdev_wait_for_examine" 00:04:14.732 } 00:04:14.732 ] 00:04:14.732 }, 00:04:14.732 { 00:04:14.732 "subsystem": "scsi", 00:04:14.732 "config": null 00:04:14.732 }, 00:04:14.732 { 00:04:14.732 "subsystem": "scheduler", 00:04:14.732 "config": [ 00:04:14.732 { 00:04:14.732 "method": "framework_set_scheduler", 00:04:14.732 "params": { 00:04:14.732 "name": "static" 00:04:14.732 } 00:04:14.732 } 00:04:14.732 ] 00:04:14.732 }, 00:04:14.732 { 00:04:14.732 "subsystem": "vhost_scsi", 00:04:14.732 "config": [] 00:04:14.732 }, 00:04:14.732 { 00:04:14.732 "subsystem": "vhost_blk", 00:04:14.732 "config": [] 00:04:14.732 }, 00:04:14.732 { 00:04:14.732 "subsystem": "ublk", 00:04:14.732 "config": [] 00:04:14.732 }, 00:04:14.732 { 00:04:14.732 "subsystem": "nbd", 00:04:14.732 "config": [] 00:04:14.732 }, 00:04:14.732 { 00:04:14.732 "subsystem": "nvmf", 00:04:14.732 "config": [ 00:04:14.732 { 00:04:14.732 "method": "nvmf_set_config", 00:04:14.732 "params": { 00:04:14.732 "discovery_filter": "match_any", 00:04:14.732 "admin_cmd_passthru": { 00:04:14.732 "identify_ctrlr": false 00:04:14.732 }, 00:04:14.732 "dhchap_digests": [ 00:04:14.732 "sha256", 00:04:14.732 "sha384", 00:04:14.732 "sha512" 00:04:14.732 ], 00:04:14.732 "dhchap_dhgroups": [ 00:04:14.732 "null", 00:04:14.732 "ffdhe2048", 00:04:14.732 "ffdhe3072", 00:04:14.732 "ffdhe4096", 00:04:14.732 "ffdhe6144", 00:04:14.732 "ffdhe8192" 00:04:14.732 ] 00:04:14.732 } 00:04:14.732 }, 00:04:14.732 { 00:04:14.732 "method": "nvmf_set_max_subsystems", 00:04:14.732 "params": { 00:04:14.732 "max_subsystems": 1024 00:04:14.732 } 00:04:14.732 }, 00:04:14.732 { 00:04:14.732 "method": "nvmf_set_crdt", 00:04:14.732 "params": { 00:04:14.732 "crdt1": 0, 00:04:14.732 "crdt2": 0, 00:04:14.732 "crdt3": 0 00:04:14.732 } 00:04:14.732 }, 00:04:14.732 { 00:04:14.732 "method": "nvmf_create_transport", 00:04:14.732 "params": { 00:04:14.732 "trtype": "TCP", 00:04:14.732 "max_queue_depth": 128, 00:04:14.732 "max_io_qpairs_per_ctrlr": 127, 00:04:14.732 "in_capsule_data_size": 4096, 00:04:14.732 "max_io_size": 131072, 00:04:14.732 "io_unit_size": 131072, 00:04:14.732 "max_aq_depth": 128, 00:04:14.732 "num_shared_buffers": 511, 00:04:14.732 
"buf_cache_size": 4294967295, 00:04:14.732 "dif_insert_or_strip": false, 00:04:14.732 "zcopy": false, 00:04:14.732 "c2h_success": true, 00:04:14.732 "sock_priority": 0, 00:04:14.732 "abort_timeout_sec": 1, 00:04:14.732 "ack_timeout": 0, 00:04:14.732 "data_wr_pool_size": 0 00:04:14.732 } 00:04:14.732 } 00:04:14.732 ] 00:04:14.732 }, 00:04:14.732 { 00:04:14.732 "subsystem": "iscsi", 00:04:14.732 "config": [ 00:04:14.732 { 00:04:14.732 "method": "iscsi_set_options", 00:04:14.732 "params": { 00:04:14.732 "node_base": "iqn.2016-06.io.spdk", 00:04:14.732 "max_sessions": 128, 00:04:14.732 "max_connections_per_session": 2, 00:04:14.732 "max_queue_depth": 64, 00:04:14.732 "default_time2wait": 2, 00:04:14.732 "default_time2retain": 20, 00:04:14.732 "first_burst_length": 8192, 00:04:14.732 "immediate_data": true, 00:04:14.732 "allow_duplicated_isid": false, 00:04:14.732 "error_recovery_level": 0, 00:04:14.732 "nop_timeout": 60, 00:04:14.732 "nop_in_interval": 30, 00:04:14.732 "disable_chap": false, 00:04:14.732 "require_chap": false, 00:04:14.732 "mutual_chap": false, 00:04:14.732 "chap_group": 0, 00:04:14.732 "max_large_datain_per_connection": 64, 00:04:14.732 "max_r2t_per_connection": 4, 00:04:14.732 "pdu_pool_size": 36864, 00:04:14.732 "immediate_data_pool_size": 16384, 00:04:14.732 "data_out_pool_size": 2048 00:04:14.732 } 00:04:14.732 } 00:04:14.732 ] 00:04:14.732 } 00:04:14.732 ] 00:04:14.732 } 00:04:14.732 13:43:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:14.732 13:43:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 56969 00:04:14.732 13:43:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 56969 ']' 00:04:14.732 13:43:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 56969 00:04:14.732 13:43:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:14.732 13:43:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:14.732 13:43:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56969 00:04:14.732 killing process with pid 56969 00:04:14.732 13:43:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:14.732 13:43:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:14.732 13:43:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56969' 00:04:14.732 13:43:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 56969 00:04:14.732 13:43:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 56969 00:04:15.300 13:43:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:15.300 13:43:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=56997 00:04:15.300 13:43:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:20.567 13:43:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 56997 00:04:20.567 13:43:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 56997 ']' 00:04:20.567 13:43:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 56997 00:04:20.567 13:43:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:20.567 13:43:19 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:20.567 13:43:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56997 00:04:20.567 killing process with pid 56997 00:04:20.567 13:43:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:20.567 13:43:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:20.567 13:43:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56997' 00:04:20.567 13:43:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 56997 00:04:20.567 13:43:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 56997 00:04:20.826 13:43:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:20.826 13:43:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:20.826 ************************************ 00:04:20.826 END TEST skip_rpc_with_json 00:04:20.826 ************************************ 00:04:20.826 00:04:20.826 real 0m7.240s 00:04:20.826 user 0m6.799s 00:04:20.826 sys 0m0.806s 00:04:20.826 13:43:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.826 13:43:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:20.827 13:43:20 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:20.827 13:43:20 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.827 13:43:20 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.827 13:43:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.827 ************************************ 00:04:20.827 START TEST skip_rpc_with_delay 00:04:20.827 ************************************ 00:04:20.827 13:43:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:20.827 13:43:20 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:20.827 13:43:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:20.827 13:43:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:20.827 13:43:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:20.827 13:43:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:20.827 13:43:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:20.827 13:43:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:20.827 13:43:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:20.827 13:43:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:20.827 13:43:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:20.827 13:43:20 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:20.827 13:43:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:20.827 [2024-12-06 13:43:20.158683] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:20.827 13:43:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:20.827 13:43:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:20.827 13:43:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:20.827 13:43:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:20.827 00:04:20.827 real 0m0.098s 00:04:20.827 user 0m0.057s 00:04:20.827 sys 0m0.039s 00:04:20.827 13:43:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.827 ************************************ 00:04:20.827 END TEST skip_rpc_with_delay 00:04:20.827 ************************************ 00:04:20.827 13:43:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:20.827 13:43:20 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:21.086 13:43:20 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:21.086 13:43:20 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:21.086 13:43:20 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.086 13:43:20 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.086 13:43:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.086 ************************************ 00:04:21.086 START TEST exit_on_failed_rpc_init 00:04:21.086 ************************************ 00:04:21.086 13:43:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:21.086 13:43:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57106 00:04:21.086 13:43:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:21.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:21.086 13:43:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57106 00:04:21.086 13:43:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57106 ']' 00:04:21.086 13:43:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:21.086 13:43:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:21.086 13:43:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:21.086 13:43:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:21.086 13:43:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:21.086 [2024-12-06 13:43:20.303495] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:04:21.086 [2024-12-06 13:43:20.303592] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57106 ] 00:04:21.086 [2024-12-06 13:43:20.448025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.346 [2024-12-06 13:43:20.493514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.346 [2024-12-06 13:43:20.581297] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:21.605 13:43:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:21.605 13:43:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:21.605 13:43:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:21.606 13:43:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:21.606 13:43:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:21.606 13:43:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:21.606 13:43:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:21.606 13:43:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:21.606 13:43:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:21.606 13:43:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:21.606 13:43:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:21.606 13:43:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:21.606 13:43:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:21.606 13:43:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:21.606 13:43:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:21.606 [2024-12-06 13:43:20.890706] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:04:21.606 [2024-12-06 13:43:20.890806] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57117 ] 00:04:21.865 [2024-12-06 13:43:21.044293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.865 [2024-12-06 13:43:21.094046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:21.865 [2024-12-06 13:43:21.094467] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
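The two errors around this point are the whole purpose of exit_on_failed_rpc_init: a second spdk_tgt is started while the first still owns the default RPC socket, and it must fail instead of initializing. A minimal by-hand reproduction of that collision, assuming the default /var/tmp/spdk.sock path seen in the error above (the sleep and explicit cleanup are illustrative, not part of skip_rpc.sh):

    # first target claims the default RPC socket /var/tmp/spdk.sock
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    first_pid=$!
    sleep 1
    # a second target on another core mask but the same socket should fail in
    # _spdk_rpc_listen() with "RPC Unix domain socket path ... in use"
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
    echo "second target exited with $?"   # non-zero is the expected outcome
    kill "$first_pid"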
00:04:21.865 [2024-12-06 13:43:21.094506] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:21.865 [2024-12-06 13:43:21.094520] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:21.865 13:43:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:21.865 13:43:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:21.865 13:43:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:21.865 13:43:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:21.865 13:43:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:21.865 13:43:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:21.865 13:43:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:21.865 13:43:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57106 00:04:21.865 13:43:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57106 ']' 00:04:21.865 13:43:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57106 00:04:21.865 13:43:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:21.865 13:43:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:21.865 13:43:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57106 00:04:21.865 killing process with pid 57106 00:04:21.865 13:43:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:21.865 13:43:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:21.865 13:43:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57106' 00:04:21.865 13:43:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57106 00:04:21.865 13:43:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57106 00:04:22.450 00:04:22.450 real 0m1.440s 00:04:22.450 user 0m1.443s 00:04:22.450 sys 0m0.444s 00:04:22.450 ************************************ 00:04:22.450 END TEST exit_on_failed_rpc_init 00:04:22.450 ************************************ 00:04:22.450 13:43:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.450 13:43:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:22.450 13:43:21 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:22.450 00:04:22.450 real 0m14.736s 00:04:22.450 user 0m13.600s 00:04:22.450 sys 0m1.852s 00:04:22.450 ************************************ 00:04:22.450 END TEST skip_rpc 00:04:22.450 ************************************ 00:04:22.450 13:43:21 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.450 13:43:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.450 13:43:21 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:22.450 13:43:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.450 13:43:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.450 13:43:21 -- common/autotest_common.sh@10 -- # set +x 00:04:22.450 
************************************ 00:04:22.450 START TEST rpc_client 00:04:22.450 ************************************ 00:04:22.450 13:43:21 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:22.450 * Looking for test storage... 00:04:22.709 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:22.709 13:43:21 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:22.709 13:43:21 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:22.709 13:43:21 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:22.709 13:43:21 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:22.709 13:43:21 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:22.709 13:43:21 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:22.709 13:43:21 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:22.709 13:43:21 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:22.709 13:43:21 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:22.709 13:43:21 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:22.709 13:43:21 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:22.709 13:43:21 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:22.709 13:43:21 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:22.709 13:43:21 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:22.709 13:43:21 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:22.709 13:43:21 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:22.709 13:43:21 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:22.709 13:43:21 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:22.709 13:43:21 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:22.710 13:43:21 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:22.710 13:43:21 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:22.710 13:43:21 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:22.710 13:43:21 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:22.710 13:43:21 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:22.710 13:43:21 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:22.710 13:43:21 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:22.710 13:43:21 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:22.710 13:43:21 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:22.710 13:43:21 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:22.710 13:43:21 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:22.710 13:43:21 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:22.710 13:43:21 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:22.710 13:43:21 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:22.710 13:43:21 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:22.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.710 --rc genhtml_branch_coverage=1 00:04:22.710 --rc genhtml_function_coverage=1 00:04:22.710 --rc genhtml_legend=1 00:04:22.710 --rc geninfo_all_blocks=1 00:04:22.710 --rc geninfo_unexecuted_blocks=1 00:04:22.710 00:04:22.710 ' 00:04:22.710 13:43:21 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:22.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.710 --rc genhtml_branch_coverage=1 00:04:22.710 --rc genhtml_function_coverage=1 00:04:22.710 --rc genhtml_legend=1 00:04:22.710 --rc geninfo_all_blocks=1 00:04:22.710 --rc geninfo_unexecuted_blocks=1 00:04:22.710 00:04:22.710 ' 00:04:22.710 13:43:21 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:22.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.710 --rc genhtml_branch_coverage=1 00:04:22.710 --rc genhtml_function_coverage=1 00:04:22.710 --rc genhtml_legend=1 00:04:22.710 --rc geninfo_all_blocks=1 00:04:22.710 --rc geninfo_unexecuted_blocks=1 00:04:22.710 00:04:22.710 ' 00:04:22.710 13:43:21 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:22.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.710 --rc genhtml_branch_coverage=1 00:04:22.710 --rc genhtml_function_coverage=1 00:04:22.710 --rc genhtml_legend=1 00:04:22.710 --rc geninfo_all_blocks=1 00:04:22.710 --rc geninfo_unexecuted_blocks=1 00:04:22.710 00:04:22.710 ' 00:04:22.710 13:43:21 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:22.710 OK 00:04:22.710 13:43:21 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:22.710 00:04:22.710 real 0m0.206s 00:04:22.710 user 0m0.125s 00:04:22.710 sys 0m0.088s 00:04:22.710 ************************************ 00:04:22.710 END TEST rpc_client 00:04:22.710 ************************************ 00:04:22.710 13:43:21 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.710 13:43:21 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:22.710 13:43:22 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:22.710 13:43:22 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.710 13:43:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.710 13:43:22 -- common/autotest_common.sh@10 -- # set +x 00:04:22.710 ************************************ 00:04:22.710 START TEST json_config 00:04:22.710 ************************************ 00:04:22.710 13:43:22 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:22.710 13:43:22 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:22.710 13:43:22 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:22.710 13:43:22 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:22.969 13:43:22 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:22.969 13:43:22 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:22.969 13:43:22 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:22.969 13:43:22 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:22.969 13:43:22 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:22.969 13:43:22 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:22.969 13:43:22 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:22.969 13:43:22 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:22.969 13:43:22 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:22.969 13:43:22 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:22.969 13:43:22 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:22.969 13:43:22 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:22.969 13:43:22 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:22.969 13:43:22 json_config -- scripts/common.sh@345 -- # : 1 00:04:22.969 13:43:22 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:22.969 13:43:22 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:22.969 13:43:22 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:22.969 13:43:22 json_config -- scripts/common.sh@353 -- # local d=1 00:04:22.969 13:43:22 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:22.969 13:43:22 json_config -- scripts/common.sh@355 -- # echo 1 00:04:22.969 13:43:22 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:22.969 13:43:22 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:22.969 13:43:22 json_config -- scripts/common.sh@353 -- # local d=2 00:04:22.969 13:43:22 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:22.969 13:43:22 json_config -- scripts/common.sh@355 -- # echo 2 00:04:22.969 13:43:22 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:22.969 13:43:22 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:22.969 13:43:22 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:22.969 13:43:22 json_config -- scripts/common.sh@368 -- # return 0 00:04:22.969 13:43:22 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:22.969 13:43:22 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:22.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.969 --rc genhtml_branch_coverage=1 00:04:22.969 --rc genhtml_function_coverage=1 00:04:22.969 --rc genhtml_legend=1 00:04:22.969 --rc geninfo_all_blocks=1 00:04:22.969 --rc geninfo_unexecuted_blocks=1 00:04:22.969 00:04:22.969 ' 00:04:22.969 13:43:22 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:22.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.969 --rc genhtml_branch_coverage=1 00:04:22.969 --rc genhtml_function_coverage=1 00:04:22.969 --rc genhtml_legend=1 00:04:22.969 --rc geninfo_all_blocks=1 00:04:22.969 --rc geninfo_unexecuted_blocks=1 00:04:22.969 00:04:22.969 ' 00:04:22.969 13:43:22 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:22.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.969 --rc genhtml_branch_coverage=1 00:04:22.969 --rc genhtml_function_coverage=1 00:04:22.969 --rc genhtml_legend=1 00:04:22.969 --rc geninfo_all_blocks=1 00:04:22.969 --rc geninfo_unexecuted_blocks=1 00:04:22.969 00:04:22.969 ' 00:04:22.969 13:43:22 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:22.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.969 --rc genhtml_branch_coverage=1 00:04:22.969 --rc genhtml_function_coverage=1 00:04:22.969 --rc genhtml_legend=1 00:04:22.969 --rc geninfo_all_blocks=1 00:04:22.969 --rc geninfo_unexecuted_blocks=1 00:04:22.969 00:04:22.969 ' 00:04:22.969 13:43:22 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:22.969 13:43:22 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:22.969 13:43:22 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:22.969 13:43:22 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:22.969 13:43:22 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:22.969 13:43:22 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:22.969 13:43:22 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:22.969 13:43:22 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:22.969 13:43:22 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:22.969 13:43:22 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:22.969 13:43:22 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:22.969 13:43:22 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:22.969 13:43:22 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:04:22.969 13:43:22 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=cfa2def7-c8af-457f-82a0-b312efdea7f4 00:04:22.969 13:43:22 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:22.969 13:43:22 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:22.969 13:43:22 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:22.969 13:43:22 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:22.969 13:43:22 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:22.969 13:43:22 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:22.969 13:43:22 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:22.969 13:43:22 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:22.969 13:43:22 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:22.969 13:43:22 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.970 13:43:22 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.970 13:43:22 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.970 13:43:22 json_config -- paths/export.sh@5 -- # export PATH 00:04:22.970 13:43:22 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.970 13:43:22 json_config -- nvmf/common.sh@51 -- # : 0 00:04:22.970 13:43:22 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:22.970 13:43:22 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:22.970 13:43:22 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:22.970 13:43:22 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:22.970 13:43:22 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:22.970 13:43:22 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:22.970 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:22.970 13:43:22 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:22.970 13:43:22 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:22.970 13:43:22 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:22.970 13:43:22 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:22.970 13:43:22 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:22.970 13:43:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:22.970 13:43:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:22.970 13:43:22 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:22.970 13:43:22 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:22.970 13:43:22 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:22.970 13:43:22 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:22.970 13:43:22 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:22.970 13:43:22 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:22.970 13:43:22 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:22.970 13:43:22 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:22.970 13:43:22 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:22.970 13:43:22 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:22.970 13:43:22 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:22.970 13:43:22 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:22.970 INFO: JSON configuration test init 00:04:22.970 13:43:22 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:22.970 13:43:22 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:22.970 13:43:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:22.970 13:43:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.970 13:43:22 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:22.970 13:43:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:22.970 13:43:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.970 13:43:22 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:22.970 13:43:22 json_config -- json_config/common.sh@9 -- # local app=target 00:04:22.970 13:43:22 json_config -- json_config/common.sh@10 -- # shift 
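The json_config_test_start_app call traced below boils down to launching spdk_tgt on a private RPC socket and blocking until that socket answers. A condensed sketch of the same flow, assuming waitforlisten can be approximated by polling rpc_get_methods (the flags and socket path are the ones visible in the trace that follows):

    # start the target on its own RPC socket, paused until framework_start_init is sent
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    app_pid=$!
    # poll the socket instead of sleeping blindly; rpc_get_methods is one of the
    # RPCs served even before subsystem initialization
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "target (pid $app_pid) is listening on /var/tmp/spdk_tgt.sock"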
00:04:22.970 13:43:22 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:22.970 13:43:22 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:22.970 13:43:22 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:22.970 13:43:22 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:22.970 13:43:22 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:22.970 13:43:22 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57256 00:04:22.970 Waiting for target to run... 00:04:22.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:22.970 13:43:22 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:22.970 13:43:22 json_config -- json_config/common.sh@25 -- # waitforlisten 57256 /var/tmp/spdk_tgt.sock 00:04:22.970 13:43:22 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:22.970 13:43:22 json_config -- common/autotest_common.sh@835 -- # '[' -z 57256 ']' 00:04:22.970 13:43:22 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:22.970 13:43:22 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:22.970 13:43:22 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:22.970 13:43:22 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:22.970 13:43:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.970 [2024-12-06 13:43:22.315413] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:04:22.970 [2024-12-06 13:43:22.315736] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57256 ] 00:04:23.537 [2024-12-06 13:43:22.759233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.537 [2024-12-06 13:43:22.816609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.104 00:04:24.104 13:43:23 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:24.104 13:43:23 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:24.104 13:43:23 json_config -- json_config/common.sh@26 -- # echo '' 00:04:24.104 13:43:23 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:24.104 13:43:23 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:24.104 13:43:23 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:24.104 13:43:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.104 13:43:23 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:24.104 13:43:23 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:24.104 13:43:23 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:24.104 13:43:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.104 13:43:23 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:24.104 13:43:23 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:24.104 13:43:23 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:24.362 [2024-12-06 13:43:23.665421] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:24.620 13:43:23 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:24.620 13:43:23 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:24.620 13:43:23 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:24.620 13:43:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.620 13:43:23 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:24.620 13:43:23 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:24.620 13:43:23 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:24.620 13:43:23 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:24.620 13:43:23 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:24.620 13:43:23 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:24.620 13:43:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:24.620 13:43:23 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:24.878 13:43:24 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:24.878 13:43:24 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:24.878 13:43:24 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:24.878 13:43:24 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:24.878 13:43:24 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:24.878 13:43:24 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:24.878 13:43:24 json_config -- json_config/json_config.sh@54 -- # sort 00:04:24.878 13:43:24 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:24.878 13:43:24 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:24.878 13:43:24 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:24.878 13:43:24 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:24.878 13:43:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.878 13:43:24 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:24.878 13:43:24 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:24.878 13:43:24 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:24.878 13:43:24 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:24.878 13:43:24 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:24.878 13:43:24 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:24.878 13:43:24 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:24.878 13:43:24 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:24.878 13:43:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.878 13:43:24 json_config -- 
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:24.878 13:43:24 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:24.878 13:43:24 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:24.878 13:43:24 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:24.878 13:43:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:25.137 MallocForNvmf0 00:04:25.137 13:43:24 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:25.137 13:43:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:25.395 MallocForNvmf1 00:04:25.395 13:43:24 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:25.395 13:43:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:25.653 [2024-12-06 13:43:24.928252] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:25.653 13:43:24 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:25.653 13:43:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:25.923 13:43:25 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:25.923 13:43:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:26.180 13:43:25 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:26.180 13:43:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:26.438 13:43:25 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:26.438 13:43:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:26.438 [2024-12-06 13:43:25.812768] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:26.438 13:43:25 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:26.438 13:43:25 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:26.438 13:43:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.696 13:43:25 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:26.696 13:43:25 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:26.696 13:43:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.696 13:43:25 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 00:04:26.696 13:43:25 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:26.696 13:43:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:26.954 MallocBdevForConfigChangeCheck 00:04:26.954 13:43:26 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:26.954 13:43:26 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:26.954 13:43:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.954 13:43:26 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:26.954 13:43:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:27.521 INFO: shutting down applications... 00:04:27.521 13:43:26 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:27.521 13:43:26 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:27.521 13:43:26 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:27.521 13:43:26 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:27.521 13:43:26 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:27.789 Calling clear_iscsi_subsystem 00:04:27.789 Calling clear_nvmf_subsystem 00:04:27.789 Calling clear_nbd_subsystem 00:04:27.789 Calling clear_ublk_subsystem 00:04:27.789 Calling clear_vhost_blk_subsystem 00:04:27.789 Calling clear_vhost_scsi_subsystem 00:04:27.789 Calling clear_bdev_subsystem 00:04:27.789 13:43:27 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:27.789 13:43:27 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:27.789 13:43:27 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:27.789 13:43:27 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:27.789 13:43:27 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:27.789 13:43:27 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:28.098 13:43:27 json_config -- json_config/json_config.sh@352 -- # break 00:04:28.098 13:43:27 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:28.365 13:43:27 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:28.365 13:43:27 json_config -- json_config/common.sh@31 -- # local app=target 00:04:28.365 13:43:27 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:28.365 13:43:27 json_config -- json_config/common.sh@35 -- # [[ -n 57256 ]] 00:04:28.365 13:43:27 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57256 00:04:28.365 13:43:27 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:28.365 13:43:27 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:28.365 13:43:27 json_config -- json_config/common.sh@41 -- # kill -0 57256 00:04:28.365 13:43:27 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:04:28.624 13:43:27 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:28.624 13:43:27 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:28.624 13:43:27 json_config -- json_config/common.sh@41 -- # kill -0 57256 00:04:28.624 13:43:27 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:28.624 13:43:27 json_config -- json_config/common.sh@43 -- # break 00:04:28.624 13:43:27 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:28.624 SPDK target shutdown done 00:04:28.624 INFO: relaunching applications... 00:04:28.624 13:43:27 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:28.624 13:43:27 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:28.624 13:43:27 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:28.624 13:43:27 json_config -- json_config/common.sh@9 -- # local app=target 00:04:28.624 13:43:27 json_config -- json_config/common.sh@10 -- # shift 00:04:28.624 13:43:27 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:28.624 13:43:27 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:28.624 13:43:27 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:28.624 13:43:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:28.624 13:43:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:28.624 13:43:27 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57452 00:04:28.624 13:43:27 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:28.624 Waiting for target to run... 00:04:28.624 13:43:27 json_config -- json_config/common.sh@25 -- # waitforlisten 57452 /var/tmp/spdk_tgt.sock 00:04:28.624 13:43:27 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:28.624 13:43:27 json_config -- common/autotest_common.sh@835 -- # '[' -z 57452 ']' 00:04:28.624 13:43:27 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:28.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:28.624 13:43:27 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:28.624 13:43:27 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:28.624 13:43:27 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:28.624 13:43:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.883 [2024-12-06 13:43:28.056755] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:04:28.883 [2024-12-06 13:43:28.056859] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57452 ] 00:04:29.141 [2024-12-06 13:43:28.497227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.400 [2024-12-06 13:43:28.553329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.400 [2024-12-06 13:43:28.691938] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:29.659 [2024-12-06 13:43:28.911550] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:29.659 [2024-12-06 13:43:28.943650] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:29.660 00:04:29.660 INFO: Checking if target configuration is the same... 00:04:29.660 13:43:29 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:29.660 13:43:29 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:29.660 13:43:29 json_config -- json_config/common.sh@26 -- # echo '' 00:04:29.660 13:43:29 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:29.660 13:43:29 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:29.660 13:43:29 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:29.660 13:43:29 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:29.660 13:43:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:29.660 + '[' 2 -ne 2 ']' 00:04:29.660 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:29.660 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:29.660 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:29.660 +++ basename /dev/fd/62 00:04:29.660 ++ mktemp /tmp/62.XXX 00:04:29.660 + tmp_file_1=/tmp/62.OMC 00:04:29.919 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:29.919 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:29.919 + tmp_file_2=/tmp/spdk_tgt_config.json.VJS 00:04:29.919 + ret=0 00:04:29.919 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:30.178 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:30.178 + diff -u /tmp/62.OMC /tmp/spdk_tgt_config.json.VJS 00:04:30.178 INFO: JSON config files are the same 00:04:30.179 + echo 'INFO: JSON config files are the same' 00:04:30.179 + rm /tmp/62.OMC /tmp/spdk_tgt_config.json.VJS 00:04:30.179 + exit 0 00:04:30.179 INFO: changing configuration and checking if this can be detected... 00:04:30.179 13:43:29 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:30.179 13:43:29 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
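The comparison above, which is about to be repeated after a deliberate change, dumps the running target's configuration over RPC and diffs it, after sorting, against the file the target was started from. Roughly the same check by hand, with illustrative temp-file names standing in for the mktemp paths above:

    # dump the live configuration over the target's RPC socket
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live.json
    # sort both documents so ordering differences do not show up as changes
    /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort < /tmp/live.json > /tmp/live.sorted
    /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort \
        < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/file.sorted
    # an empty diff means the target still matches its start-up configuration
    diff -u /tmp/file.sorted /tmp/live.sorted && echo 'INFO: JSON config files are the same'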
00:04:30.179 13:43:29 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:30.179 13:43:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:30.438 13:43:29 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:30.438 13:43:29 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:30.438 13:43:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:30.438 + '[' 2 -ne 2 ']' 00:04:30.438 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:30.438 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:30.438 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:30.438 +++ basename /dev/fd/62 00:04:30.438 ++ mktemp /tmp/62.XXX 00:04:30.438 + tmp_file_1=/tmp/62.yEg 00:04:30.438 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:30.438 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:30.438 + tmp_file_2=/tmp/spdk_tgt_config.json.vDB 00:04:30.438 + ret=0 00:04:30.438 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:31.007 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:31.007 + diff -u /tmp/62.yEg /tmp/spdk_tgt_config.json.vDB 00:04:31.007 + ret=1 00:04:31.007 + echo '=== Start of file: /tmp/62.yEg ===' 00:04:31.007 + cat /tmp/62.yEg 00:04:31.007 + echo '=== End of file: /tmp/62.yEg ===' 00:04:31.007 + echo '' 00:04:31.007 + echo '=== Start of file: /tmp/spdk_tgt_config.json.vDB ===' 00:04:31.007 + cat /tmp/spdk_tgt_config.json.vDB 00:04:31.007 + echo '=== End of file: /tmp/spdk_tgt_config.json.vDB ===' 00:04:31.007 + echo '' 00:04:31.007 + rm /tmp/62.yEg /tmp/spdk_tgt_config.json.vDB 00:04:31.007 + exit 1 00:04:31.007 INFO: configuration change detected. 00:04:31.007 13:43:30 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
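Note: the two diff passes above are the heart of the json_config relaunch check. The running target's configuration is dumped over RPC with save_config, both that dump and the on-disk spdk_tgt_config.json are normalized by config_filter.py -method sort, and the pair is diffed: an identical pair exits 0 ("JSON config files are the same"); after bdev_malloc_delete removes MallocBdevForConfigChangeCheck the same diff must exit 1 ("configuration change detected."). A minimal standalone sketch of that comparison, assuming the rpc.py and config_filter.py paths seen in the trace and a target already listening on /var/tmp/spdk_tgt.sock (this is not the actual json_diff.sh):

    #!/usr/bin/env bash
    # Hedged sketch of the save_config / sort / diff check traced above.
    set -euo pipefail
    rootdir=/home/vagrant/spdk_repo/spdk        # repo path as it appears in the trace
    sock=/var/tmp/spdk_tgt.sock                 # target RPC socket as it appears in the trace
    cfg=$rootdir/spdk_tgt_config.json           # on-disk config being compared

    live=$(mktemp)
    disk=$(mktemp)
    # Dump the live configuration over RPC and sort both sides so JSON key
    # order cannot produce spurious differences.
    "$rootdir/scripts/rpc.py" -s "$sock" save_config |
        "$rootdir/test/json_config/config_filter.py" -method sort > "$live"
    "$rootdir/test/json_config/config_filter.py" -method sort < "$cfg" > "$disk"

    if diff -u "$live" "$disk"; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi
    rm -f "$live" "$disk"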
00:04:31.007 13:43:30 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:31.007 13:43:30 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:31.007 13:43:30 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:31.007 13:43:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.007 13:43:30 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:31.007 13:43:30 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:31.007 13:43:30 json_config -- json_config/json_config.sh@324 -- # [[ -n 57452 ]] 00:04:31.007 13:43:30 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:31.007 13:43:30 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:31.007 13:43:30 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:31.007 13:43:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.007 13:43:30 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:31.008 13:43:30 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:31.008 13:43:30 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:31.008 13:43:30 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:31.008 13:43:30 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:31.008 13:43:30 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:31.008 13:43:30 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:31.008 13:43:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.008 13:43:30 json_config -- json_config/json_config.sh@330 -- # killprocess 57452 00:04:31.008 13:43:30 json_config -- common/autotest_common.sh@954 -- # '[' -z 57452 ']' 00:04:31.008 13:43:30 json_config -- common/autotest_common.sh@958 -- # kill -0 57452 00:04:31.008 13:43:30 json_config -- common/autotest_common.sh@959 -- # uname 00:04:31.008 13:43:30 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:31.008 13:43:30 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57452 00:04:31.008 killing process with pid 57452 00:04:31.008 13:43:30 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:31.008 13:43:30 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:31.008 13:43:30 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57452' 00:04:31.008 13:43:30 json_config -- common/autotest_common.sh@973 -- # kill 57452 00:04:31.008 13:43:30 json_config -- common/autotest_common.sh@978 -- # wait 57452 00:04:31.266 13:43:30 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:31.266 13:43:30 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:31.266 13:43:30 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:31.266 13:43:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.525 13:43:30 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:31.525 INFO: Success 00:04:31.525 13:43:30 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:31.525 00:04:31.525 real 0m8.644s 00:04:31.525 user 0m12.434s 00:04:31.525 sys 0m1.752s 00:04:31.525 
************************************ 00:04:31.525 END TEST json_config 00:04:31.525 ************************************ 00:04:31.525 13:43:30 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.525 13:43:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.525 13:43:30 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:31.525 13:43:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.525 13:43:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.525 13:43:30 -- common/autotest_common.sh@10 -- # set +x 00:04:31.525 ************************************ 00:04:31.525 START TEST json_config_extra_key 00:04:31.525 ************************************ 00:04:31.525 13:43:30 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:31.525 13:43:30 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:31.525 13:43:30 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:31.525 13:43:30 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:31.525 13:43:30 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:31.525 13:43:30 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:31.525 13:43:30 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:31.525 13:43:30 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:31.525 13:43:30 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:31.526 13:43:30 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:31.526 13:43:30 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:31.526 13:43:30 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:31.526 13:43:30 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:31.526 13:43:30 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:31.526 13:43:30 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:31.526 13:43:30 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:31.526 13:43:30 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:31.526 13:43:30 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:31.526 13:43:30 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:31.526 13:43:30 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:31.526 13:43:30 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:31.526 13:43:30 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:31.526 13:43:30 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:31.526 13:43:30 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:31.526 13:43:30 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:31.526 13:43:30 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:31.526 13:43:30 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:31.526 13:43:30 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:31.526 13:43:30 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:31.526 13:43:30 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:31.526 13:43:30 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:31.526 13:43:30 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:31.526 13:43:30 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:31.526 13:43:30 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:31.526 13:43:30 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:31.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.526 --rc genhtml_branch_coverage=1 00:04:31.526 --rc genhtml_function_coverage=1 00:04:31.526 --rc genhtml_legend=1 00:04:31.526 --rc geninfo_all_blocks=1 00:04:31.526 --rc geninfo_unexecuted_blocks=1 00:04:31.526 00:04:31.526 ' 00:04:31.526 13:43:30 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:31.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.526 --rc genhtml_branch_coverage=1 00:04:31.526 --rc genhtml_function_coverage=1 00:04:31.526 --rc genhtml_legend=1 00:04:31.526 --rc geninfo_all_blocks=1 00:04:31.526 --rc geninfo_unexecuted_blocks=1 00:04:31.526 00:04:31.526 ' 00:04:31.526 13:43:30 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:31.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.526 --rc genhtml_branch_coverage=1 00:04:31.526 --rc genhtml_function_coverage=1 00:04:31.526 --rc genhtml_legend=1 00:04:31.526 --rc geninfo_all_blocks=1 00:04:31.526 --rc geninfo_unexecuted_blocks=1 00:04:31.526 00:04:31.526 ' 00:04:31.526 13:43:30 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:31.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.526 --rc genhtml_branch_coverage=1 00:04:31.526 --rc genhtml_function_coverage=1 00:04:31.526 --rc genhtml_legend=1 00:04:31.526 --rc geninfo_all_blocks=1 00:04:31.526 --rc geninfo_unexecuted_blocks=1 00:04:31.526 00:04:31.526 ' 00:04:31.526 13:43:30 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:31.526 13:43:30 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:31.526 13:43:30 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:31.526 13:43:30 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:31.526 13:43:30 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:31.526 13:43:30 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:31.526 13:43:30 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:31.526 13:43:30 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:31.526 13:43:30 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:31.526 13:43:30 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:31.526 13:43:30 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:31.526 13:43:30 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:31.526 13:43:30 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:04:31.526 13:43:30 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=cfa2def7-c8af-457f-82a0-b312efdea7f4 00:04:31.526 13:43:30 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:31.526 13:43:30 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:31.526 13:43:30 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:31.526 13:43:30 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:31.526 13:43:30 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:31.526 13:43:30 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:31.526 13:43:30 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:31.526 13:43:30 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:31.526 13:43:30 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:31.526 13:43:30 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:31.526 13:43:30 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:31.526 13:43:30 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:31.526 13:43:30 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:31.526 13:43:30 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:31.526 13:43:30 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:31.526 13:43:30 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:31.526 13:43:30 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:31.526 13:43:30 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:31.526 13:43:30 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:31.526 13:43:30 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:31.526 13:43:30 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:31.526 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:31.526 13:43:30 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:31.786 13:43:30 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:31.786 13:43:30 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:31.786 13:43:30 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:31.786 13:43:30 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:31.786 INFO: launching applications... 00:04:31.786 13:43:30 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:31.786 13:43:30 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:31.786 13:43:30 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:31.786 13:43:30 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:31.786 13:43:30 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:31.786 13:43:30 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:31.786 13:43:30 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:31.786 13:43:30 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:31.786 13:43:30 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
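Note: the declare -A lines above are common.sh's per-app bookkeeping: one associative array each for the PID, the RPC socket, the extra spdk_tgt parameters and the JSON config path, all keyed by a logical app name ("target" in this test). A reduced sketch of that pattern, reusing the names and values visible in the trace purely as examples:

    # Hedged sketch of the per-app bookkeeping pattern; values are illustrative.
    declare -A app_pid=(['target']='')
    declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
    declare -A app_params=(['target']='-m 0x1 -s 1024')
    declare -A configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')

    app=target
    echo "would run: spdk_tgt ${app_params[$app]} -r ${app_socket[$app]} --json ${configs_path[$app]}"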
00:04:31.786 13:43:30 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:31.786 13:43:30 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:31.786 13:43:30 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:31.786 13:43:30 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:31.786 13:43:30 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:31.786 13:43:30 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:31.786 13:43:30 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:31.786 13:43:30 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:31.786 13:43:30 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57606 00:04:31.786 13:43:30 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:31.786 Waiting for target to run... 00:04:31.786 13:43:30 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57606 /var/tmp/spdk_tgt.sock 00:04:31.786 13:43:30 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:31.786 13:43:30 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57606 ']' 00:04:31.786 13:43:30 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:31.786 13:43:30 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:31.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:31.786 13:43:30 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:31.786 13:43:30 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:31.786 13:43:30 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:31.786 [2024-12-06 13:43:31.006971] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:04:31.786 [2024-12-06 13:43:31.007281] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57606 ] 00:04:32.354 [2024-12-06 13:43:31.461980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.354 [2024-12-06 13:43:31.515874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.354 [2024-12-06 13:43:31.549374] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:32.614 00:04:32.614 INFO: shutting down applications... 00:04:32.614 13:43:31 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:32.614 13:43:31 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:32.614 13:43:31 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:32.614 13:43:31 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
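Note: json_config_test_start_app backgrounds spdk_tgt with the parameters and JSON config registered for the app, records the PID (57606 above), and waitforlisten blocks until the process answers on the RPC socket or a retry budget runs out; the loop that follows in the trace is the mirror image for shutdown (SIGINT, then poll kill -0 until the PID is gone). A minimal sketch of the wait-for-listen side, assuming the rpc.py path and socket from the trace; the probe method, retry count and sleep interval are illustrative, not the autotest helper's exact values:

    # Hedged sketch of a waitforlisten-style poll; not the real helper.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk_tgt.sock
    pid=$1                        # PID of the spdk_tgt started in the background
    max_retries=100

    for ((i = 0; i < max_retries; i++)); do
        # Give up immediately if the target died during startup.
        if ! kill -0 "$pid" 2> /dev/null; then
            echo "target $pid exited before listening" >&2
            exit 1
        fi
        # rpc_get_methods only succeeds once the RPC server accepts connections.
        if "$rpc_py" -t 1 -s "$sock" rpc_get_methods &> /dev/null; then
            echo "target is listening on $sock"
            exit 0
        fi
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    exit 1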
00:04:32.614 13:43:31 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:32.614 13:43:31 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:32.614 13:43:31 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:32.614 13:43:31 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57606 ]] 00:04:32.614 13:43:31 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57606 00:04:32.614 13:43:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:32.614 13:43:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:32.614 13:43:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57606 00:04:32.614 13:43:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:33.184 13:43:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:33.184 13:43:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:33.184 13:43:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57606 00:04:33.184 13:43:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:33.751 13:43:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:33.751 13:43:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:33.751 13:43:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57606 00:04:33.751 13:43:32 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:33.751 13:43:32 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:33.751 13:43:32 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:33.751 13:43:32 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:33.751 SPDK target shutdown done 00:04:33.751 Success 00:04:33.751 13:43:32 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:33.751 00:04:33.751 real 0m2.176s 00:04:33.751 user 0m1.578s 00:04:33.751 sys 0m0.483s 00:04:33.751 ************************************ 00:04:33.751 END TEST json_config_extra_key 00:04:33.751 ************************************ 00:04:33.751 13:43:32 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.751 13:43:32 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:33.751 13:43:32 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:33.751 13:43:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.751 13:43:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.751 13:43:32 -- common/autotest_common.sh@10 -- # set +x 00:04:33.751 ************************************ 00:04:33.751 START TEST alias_rpc 00:04:33.751 ************************************ 00:04:33.751 13:43:32 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:33.751 * Looking for test storage... 
00:04:33.751 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:33.751 13:43:33 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:33.751 13:43:33 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:33.751 13:43:33 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:33.751 13:43:33 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:33.751 13:43:33 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:33.751 13:43:33 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:33.751 13:43:33 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:33.751 13:43:33 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:33.751 13:43:33 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:33.751 13:43:33 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:33.751 13:43:33 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:33.751 13:43:33 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:33.751 13:43:33 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:33.752 13:43:33 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:33.752 13:43:33 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:33.752 13:43:33 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:33.752 13:43:33 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:33.752 13:43:33 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:33.752 13:43:33 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:33.752 13:43:33 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:34.011 13:43:33 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:34.011 13:43:33 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:34.011 13:43:33 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:34.011 13:43:33 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:34.011 13:43:33 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:34.011 13:43:33 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:34.011 13:43:33 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:34.011 13:43:33 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:34.011 13:43:33 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:34.011 13:43:33 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:34.011 13:43:33 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:34.011 13:43:33 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:34.011 13:43:33 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:34.011 13:43:33 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:34.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.011 --rc genhtml_branch_coverage=1 00:04:34.011 --rc genhtml_function_coverage=1 00:04:34.011 --rc genhtml_legend=1 00:04:34.011 --rc geninfo_all_blocks=1 00:04:34.011 --rc geninfo_unexecuted_blocks=1 00:04:34.011 00:04:34.011 ' 00:04:34.011 13:43:33 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:34.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.011 --rc genhtml_branch_coverage=1 00:04:34.011 --rc genhtml_function_coverage=1 00:04:34.011 --rc genhtml_legend=1 00:04:34.011 --rc geninfo_all_blocks=1 00:04:34.011 --rc geninfo_unexecuted_blocks=1 00:04:34.011 00:04:34.011 ' 00:04:34.011 13:43:33 alias_rpc -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:34.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.011 --rc genhtml_branch_coverage=1 00:04:34.011 --rc genhtml_function_coverage=1 00:04:34.011 --rc genhtml_legend=1 00:04:34.011 --rc geninfo_all_blocks=1 00:04:34.011 --rc geninfo_unexecuted_blocks=1 00:04:34.011 00:04:34.011 ' 00:04:34.011 13:43:33 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:34.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.011 --rc genhtml_branch_coverage=1 00:04:34.011 --rc genhtml_function_coverage=1 00:04:34.011 --rc genhtml_legend=1 00:04:34.011 --rc geninfo_all_blocks=1 00:04:34.011 --rc geninfo_unexecuted_blocks=1 00:04:34.011 00:04:34.011 ' 00:04:34.011 13:43:33 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:34.011 13:43:33 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57685 00:04:34.011 13:43:33 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57685 00:04:34.011 13:43:33 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:34.011 13:43:33 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57685 ']' 00:04:34.011 13:43:33 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.011 13:43:33 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:34.011 13:43:33 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.011 13:43:33 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:34.011 13:43:33 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.011 [2024-12-06 13:43:33.235220] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:04:34.011 [2024-12-06 13:43:33.235479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57685 ] 00:04:34.011 [2024-12-06 13:43:33.373824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.272 [2024-12-06 13:43:33.426316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.272 [2024-12-06 13:43:33.514207] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:34.530 13:43:33 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.530 13:43:33 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:34.530 13:43:33 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:34.789 13:43:34 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57685 00:04:34.789 13:43:34 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57685 ']' 00:04:34.789 13:43:34 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57685 00:04:34.789 13:43:34 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:34.789 13:43:34 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:34.789 13:43:34 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57685 00:04:34.789 killing process with pid 57685 00:04:34.789 13:43:34 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:34.789 13:43:34 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:34.789 13:43:34 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57685' 00:04:34.789 13:43:34 alias_rpc -- common/autotest_common.sh@973 -- # kill 57685 00:04:34.789 13:43:34 alias_rpc -- common/autotest_common.sh@978 -- # wait 57685 00:04:35.357 ************************************ 00:04:35.357 END TEST alias_rpc 00:04:35.357 ************************************ 00:04:35.357 00:04:35.357 real 0m1.622s 00:04:35.357 user 0m1.609s 00:04:35.357 sys 0m0.509s 00:04:35.357 13:43:34 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.357 13:43:34 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.357 13:43:34 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:35.357 13:43:34 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:35.357 13:43:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.357 13:43:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.357 13:43:34 -- common/autotest_common.sh@10 -- # set +x 00:04:35.357 ************************************ 00:04:35.357 START TEST spdkcli_tcp 00:04:35.357 ************************************ 00:04:35.357 13:43:34 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:35.357 * Looking for test storage... 
00:04:35.357 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:35.357 13:43:34 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:35.357 13:43:34 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:35.357 13:43:34 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:35.616 13:43:34 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:35.616 13:43:34 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.616 13:43:34 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.616 13:43:34 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.616 13:43:34 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.616 13:43:34 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.616 13:43:34 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.616 13:43:34 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.616 13:43:34 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.616 13:43:34 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.616 13:43:34 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.616 13:43:34 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.616 13:43:34 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:35.616 13:43:34 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:35.616 13:43:34 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.616 13:43:34 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:35.617 13:43:34 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:35.617 13:43:34 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:35.617 13:43:34 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.617 13:43:34 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:35.617 13:43:34 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.617 13:43:34 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:35.617 13:43:34 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:35.617 13:43:34 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.617 13:43:34 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:35.617 13:43:34 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.617 13:43:34 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.617 13:43:34 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.617 13:43:34 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:35.617 13:43:34 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.617 13:43:34 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:35.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.617 --rc genhtml_branch_coverage=1 00:04:35.617 --rc genhtml_function_coverage=1 00:04:35.617 --rc genhtml_legend=1 00:04:35.617 --rc geninfo_all_blocks=1 00:04:35.617 --rc geninfo_unexecuted_blocks=1 00:04:35.617 00:04:35.617 ' 00:04:35.617 13:43:34 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:35.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.617 --rc genhtml_branch_coverage=1 00:04:35.617 --rc genhtml_function_coverage=1 00:04:35.617 --rc genhtml_legend=1 00:04:35.617 --rc geninfo_all_blocks=1 00:04:35.617 --rc geninfo_unexecuted_blocks=1 00:04:35.617 
00:04:35.617 ' 00:04:35.617 13:43:34 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:35.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.617 --rc genhtml_branch_coverage=1 00:04:35.617 --rc genhtml_function_coverage=1 00:04:35.617 --rc genhtml_legend=1 00:04:35.617 --rc geninfo_all_blocks=1 00:04:35.617 --rc geninfo_unexecuted_blocks=1 00:04:35.617 00:04:35.617 ' 00:04:35.617 13:43:34 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:35.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.617 --rc genhtml_branch_coverage=1 00:04:35.617 --rc genhtml_function_coverage=1 00:04:35.617 --rc genhtml_legend=1 00:04:35.617 --rc geninfo_all_blocks=1 00:04:35.617 --rc geninfo_unexecuted_blocks=1 00:04:35.617 00:04:35.617 ' 00:04:35.617 13:43:34 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:35.617 13:43:34 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:35.617 13:43:34 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:35.617 13:43:34 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:35.617 13:43:34 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:35.617 13:43:34 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:35.617 13:43:34 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:35.617 13:43:34 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:35.617 13:43:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:35.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.617 13:43:34 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57766 00:04:35.617 13:43:34 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:35.617 13:43:34 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57766 00:04:35.617 13:43:34 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57766 ']' 00:04:35.617 13:43:34 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.617 13:43:34 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:35.617 13:43:34 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.617 13:43:34 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:35.617 13:43:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:35.617 [2024-12-06 13:43:34.879167] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:04:35.617 [2024-12-06 13:43:34.879262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57766 ] 00:04:35.877 [2024-12-06 13:43:35.025387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:35.877 [2024-12-06 13:43:35.071247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:35.877 [2024-12-06 13:43:35.071266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.877 [2024-12-06 13:43:35.157791] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:36.145 13:43:35 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:36.145 13:43:35 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:36.145 13:43:35 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57771 00:04:36.145 13:43:35 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:36.145 13:43:35 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:36.405 [ 00:04:36.405 "bdev_malloc_delete", 00:04:36.405 "bdev_malloc_create", 00:04:36.405 "bdev_null_resize", 00:04:36.405 "bdev_null_delete", 00:04:36.405 "bdev_null_create", 00:04:36.405 "bdev_nvme_cuse_unregister", 00:04:36.405 "bdev_nvme_cuse_register", 00:04:36.405 "bdev_opal_new_user", 00:04:36.405 "bdev_opal_set_lock_state", 00:04:36.405 "bdev_opal_delete", 00:04:36.405 "bdev_opal_get_info", 00:04:36.405 "bdev_opal_create", 00:04:36.405 "bdev_nvme_opal_revert", 00:04:36.405 "bdev_nvme_opal_init", 00:04:36.405 "bdev_nvme_send_cmd", 00:04:36.405 "bdev_nvme_set_keys", 00:04:36.405 "bdev_nvme_get_path_iostat", 00:04:36.405 "bdev_nvme_get_mdns_discovery_info", 00:04:36.405 "bdev_nvme_stop_mdns_discovery", 00:04:36.405 "bdev_nvme_start_mdns_discovery", 00:04:36.405 "bdev_nvme_set_multipath_policy", 00:04:36.405 "bdev_nvme_set_preferred_path", 00:04:36.405 "bdev_nvme_get_io_paths", 00:04:36.405 "bdev_nvme_remove_error_injection", 00:04:36.405 "bdev_nvme_add_error_injection", 00:04:36.405 "bdev_nvme_get_discovery_info", 00:04:36.405 "bdev_nvme_stop_discovery", 00:04:36.405 "bdev_nvme_start_discovery", 00:04:36.405 "bdev_nvme_get_controller_health_info", 00:04:36.405 "bdev_nvme_disable_controller", 00:04:36.405 "bdev_nvme_enable_controller", 00:04:36.405 "bdev_nvme_reset_controller", 00:04:36.405 "bdev_nvme_get_transport_statistics", 00:04:36.405 "bdev_nvme_apply_firmware", 00:04:36.405 "bdev_nvme_detach_controller", 00:04:36.405 "bdev_nvme_get_controllers", 00:04:36.405 "bdev_nvme_attach_controller", 00:04:36.405 "bdev_nvme_set_hotplug", 00:04:36.405 "bdev_nvme_set_options", 00:04:36.405 "bdev_passthru_delete", 00:04:36.405 "bdev_passthru_create", 00:04:36.405 "bdev_lvol_set_parent_bdev", 00:04:36.405 "bdev_lvol_set_parent", 00:04:36.405 "bdev_lvol_check_shallow_copy", 00:04:36.405 "bdev_lvol_start_shallow_copy", 00:04:36.405 "bdev_lvol_grow_lvstore", 00:04:36.405 "bdev_lvol_get_lvols", 00:04:36.405 "bdev_lvol_get_lvstores", 00:04:36.405 "bdev_lvol_delete", 00:04:36.405 "bdev_lvol_set_read_only", 00:04:36.405 "bdev_lvol_resize", 00:04:36.405 "bdev_lvol_decouple_parent", 00:04:36.405 "bdev_lvol_inflate", 00:04:36.405 "bdev_lvol_rename", 00:04:36.405 "bdev_lvol_clone_bdev", 00:04:36.405 "bdev_lvol_clone", 00:04:36.405 "bdev_lvol_snapshot", 
00:04:36.405 "bdev_lvol_create", 00:04:36.405 "bdev_lvol_delete_lvstore", 00:04:36.405 "bdev_lvol_rename_lvstore", 00:04:36.405 "bdev_lvol_create_lvstore", 00:04:36.405 "bdev_raid_set_options", 00:04:36.405 "bdev_raid_remove_base_bdev", 00:04:36.405 "bdev_raid_add_base_bdev", 00:04:36.405 "bdev_raid_delete", 00:04:36.405 "bdev_raid_create", 00:04:36.405 "bdev_raid_get_bdevs", 00:04:36.405 "bdev_error_inject_error", 00:04:36.405 "bdev_error_delete", 00:04:36.405 "bdev_error_create", 00:04:36.405 "bdev_split_delete", 00:04:36.405 "bdev_split_create", 00:04:36.405 "bdev_delay_delete", 00:04:36.405 "bdev_delay_create", 00:04:36.405 "bdev_delay_update_latency", 00:04:36.405 "bdev_zone_block_delete", 00:04:36.405 "bdev_zone_block_create", 00:04:36.405 "blobfs_create", 00:04:36.405 "blobfs_detect", 00:04:36.405 "blobfs_set_cache_size", 00:04:36.405 "bdev_aio_delete", 00:04:36.405 "bdev_aio_rescan", 00:04:36.405 "bdev_aio_create", 00:04:36.405 "bdev_ftl_set_property", 00:04:36.405 "bdev_ftl_get_properties", 00:04:36.405 "bdev_ftl_get_stats", 00:04:36.405 "bdev_ftl_unmap", 00:04:36.405 "bdev_ftl_unload", 00:04:36.405 "bdev_ftl_delete", 00:04:36.405 "bdev_ftl_load", 00:04:36.405 "bdev_ftl_create", 00:04:36.405 "bdev_virtio_attach_controller", 00:04:36.405 "bdev_virtio_scsi_get_devices", 00:04:36.405 "bdev_virtio_detach_controller", 00:04:36.405 "bdev_virtio_blk_set_hotplug", 00:04:36.405 "bdev_iscsi_delete", 00:04:36.405 "bdev_iscsi_create", 00:04:36.405 "bdev_iscsi_set_options", 00:04:36.405 "bdev_uring_delete", 00:04:36.405 "bdev_uring_rescan", 00:04:36.405 "bdev_uring_create", 00:04:36.405 "accel_error_inject_error", 00:04:36.405 "ioat_scan_accel_module", 00:04:36.405 "dsa_scan_accel_module", 00:04:36.405 "iaa_scan_accel_module", 00:04:36.405 "keyring_file_remove_key", 00:04:36.405 "keyring_file_add_key", 00:04:36.405 "keyring_linux_set_options", 00:04:36.405 "fsdev_aio_delete", 00:04:36.405 "fsdev_aio_create", 00:04:36.405 "iscsi_get_histogram", 00:04:36.405 "iscsi_enable_histogram", 00:04:36.405 "iscsi_set_options", 00:04:36.405 "iscsi_get_auth_groups", 00:04:36.405 "iscsi_auth_group_remove_secret", 00:04:36.405 "iscsi_auth_group_add_secret", 00:04:36.405 "iscsi_delete_auth_group", 00:04:36.405 "iscsi_create_auth_group", 00:04:36.405 "iscsi_set_discovery_auth", 00:04:36.405 "iscsi_get_options", 00:04:36.405 "iscsi_target_node_request_logout", 00:04:36.405 "iscsi_target_node_set_redirect", 00:04:36.405 "iscsi_target_node_set_auth", 00:04:36.405 "iscsi_target_node_add_lun", 00:04:36.405 "iscsi_get_stats", 00:04:36.405 "iscsi_get_connections", 00:04:36.405 "iscsi_portal_group_set_auth", 00:04:36.405 "iscsi_start_portal_group", 00:04:36.405 "iscsi_delete_portal_group", 00:04:36.405 "iscsi_create_portal_group", 00:04:36.405 "iscsi_get_portal_groups", 00:04:36.405 "iscsi_delete_target_node", 00:04:36.405 "iscsi_target_node_remove_pg_ig_maps", 00:04:36.405 "iscsi_target_node_add_pg_ig_maps", 00:04:36.405 "iscsi_create_target_node", 00:04:36.405 "iscsi_get_target_nodes", 00:04:36.405 "iscsi_delete_initiator_group", 00:04:36.405 "iscsi_initiator_group_remove_initiators", 00:04:36.405 "iscsi_initiator_group_add_initiators", 00:04:36.405 "iscsi_create_initiator_group", 00:04:36.405 "iscsi_get_initiator_groups", 00:04:36.405 "nvmf_set_crdt", 00:04:36.405 "nvmf_set_config", 00:04:36.405 "nvmf_set_max_subsystems", 00:04:36.405 "nvmf_stop_mdns_prr", 00:04:36.405 "nvmf_publish_mdns_prr", 00:04:36.405 "nvmf_subsystem_get_listeners", 00:04:36.405 "nvmf_subsystem_get_qpairs", 00:04:36.405 
"nvmf_subsystem_get_controllers", 00:04:36.405 "nvmf_get_stats", 00:04:36.405 "nvmf_get_transports", 00:04:36.405 "nvmf_create_transport", 00:04:36.405 "nvmf_get_targets", 00:04:36.405 "nvmf_delete_target", 00:04:36.405 "nvmf_create_target", 00:04:36.405 "nvmf_subsystem_allow_any_host", 00:04:36.405 "nvmf_subsystem_set_keys", 00:04:36.405 "nvmf_subsystem_remove_host", 00:04:36.405 "nvmf_subsystem_add_host", 00:04:36.405 "nvmf_ns_remove_host", 00:04:36.405 "nvmf_ns_add_host", 00:04:36.405 "nvmf_subsystem_remove_ns", 00:04:36.405 "nvmf_subsystem_set_ns_ana_group", 00:04:36.405 "nvmf_subsystem_add_ns", 00:04:36.405 "nvmf_subsystem_listener_set_ana_state", 00:04:36.405 "nvmf_discovery_get_referrals", 00:04:36.405 "nvmf_discovery_remove_referral", 00:04:36.405 "nvmf_discovery_add_referral", 00:04:36.405 "nvmf_subsystem_remove_listener", 00:04:36.405 "nvmf_subsystem_add_listener", 00:04:36.405 "nvmf_delete_subsystem", 00:04:36.405 "nvmf_create_subsystem", 00:04:36.405 "nvmf_get_subsystems", 00:04:36.405 "env_dpdk_get_mem_stats", 00:04:36.405 "nbd_get_disks", 00:04:36.405 "nbd_stop_disk", 00:04:36.405 "nbd_start_disk", 00:04:36.405 "ublk_recover_disk", 00:04:36.405 "ublk_get_disks", 00:04:36.405 "ublk_stop_disk", 00:04:36.405 "ublk_start_disk", 00:04:36.405 "ublk_destroy_target", 00:04:36.405 "ublk_create_target", 00:04:36.405 "virtio_blk_create_transport", 00:04:36.405 "virtio_blk_get_transports", 00:04:36.405 "vhost_controller_set_coalescing", 00:04:36.405 "vhost_get_controllers", 00:04:36.405 "vhost_delete_controller", 00:04:36.405 "vhost_create_blk_controller", 00:04:36.405 "vhost_scsi_controller_remove_target", 00:04:36.405 "vhost_scsi_controller_add_target", 00:04:36.405 "vhost_start_scsi_controller", 00:04:36.405 "vhost_create_scsi_controller", 00:04:36.405 "thread_set_cpumask", 00:04:36.405 "scheduler_set_options", 00:04:36.405 "framework_get_governor", 00:04:36.405 "framework_get_scheduler", 00:04:36.405 "framework_set_scheduler", 00:04:36.405 "framework_get_reactors", 00:04:36.405 "thread_get_io_channels", 00:04:36.405 "thread_get_pollers", 00:04:36.405 "thread_get_stats", 00:04:36.405 "framework_monitor_context_switch", 00:04:36.405 "spdk_kill_instance", 00:04:36.405 "log_enable_timestamps", 00:04:36.405 "log_get_flags", 00:04:36.405 "log_clear_flag", 00:04:36.405 "log_set_flag", 00:04:36.405 "log_get_level", 00:04:36.405 "log_set_level", 00:04:36.405 "log_get_print_level", 00:04:36.405 "log_set_print_level", 00:04:36.405 "framework_enable_cpumask_locks", 00:04:36.405 "framework_disable_cpumask_locks", 00:04:36.405 "framework_wait_init", 00:04:36.405 "framework_start_init", 00:04:36.405 "scsi_get_devices", 00:04:36.405 "bdev_get_histogram", 00:04:36.405 "bdev_enable_histogram", 00:04:36.405 "bdev_set_qos_limit", 00:04:36.405 "bdev_set_qd_sampling_period", 00:04:36.405 "bdev_get_bdevs", 00:04:36.405 "bdev_reset_iostat", 00:04:36.405 "bdev_get_iostat", 00:04:36.405 "bdev_examine", 00:04:36.405 "bdev_wait_for_examine", 00:04:36.405 "bdev_set_options", 00:04:36.405 "accel_get_stats", 00:04:36.405 "accel_set_options", 00:04:36.405 "accel_set_driver", 00:04:36.405 "accel_crypto_key_destroy", 00:04:36.406 "accel_crypto_keys_get", 00:04:36.406 "accel_crypto_key_create", 00:04:36.406 "accel_assign_opc", 00:04:36.406 "accel_get_module_info", 00:04:36.406 "accel_get_opc_assignments", 00:04:36.406 "vmd_rescan", 00:04:36.406 "vmd_remove_device", 00:04:36.406 "vmd_enable", 00:04:36.406 "sock_get_default_impl", 00:04:36.406 "sock_set_default_impl", 00:04:36.406 "sock_impl_set_options", 00:04:36.406 
"sock_impl_get_options", 00:04:36.406 "iobuf_get_stats", 00:04:36.406 "iobuf_set_options", 00:04:36.406 "keyring_get_keys", 00:04:36.406 "framework_get_pci_devices", 00:04:36.406 "framework_get_config", 00:04:36.406 "framework_get_subsystems", 00:04:36.406 "fsdev_set_opts", 00:04:36.406 "fsdev_get_opts", 00:04:36.406 "trace_get_info", 00:04:36.406 "trace_get_tpoint_group_mask", 00:04:36.406 "trace_disable_tpoint_group", 00:04:36.406 "trace_enable_tpoint_group", 00:04:36.406 "trace_clear_tpoint_mask", 00:04:36.406 "trace_set_tpoint_mask", 00:04:36.406 "notify_get_notifications", 00:04:36.406 "notify_get_types", 00:04:36.406 "spdk_get_version", 00:04:36.406 "rpc_get_methods" 00:04:36.406 ] 00:04:36.406 13:43:35 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:36.406 13:43:35 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:36.406 13:43:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:36.406 13:43:35 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:36.406 13:43:35 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57766 00:04:36.406 13:43:35 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57766 ']' 00:04:36.406 13:43:35 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57766 00:04:36.406 13:43:35 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:36.406 13:43:35 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:36.406 13:43:35 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57766 00:04:36.406 killing process with pid 57766 00:04:36.406 13:43:35 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:36.406 13:43:35 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:36.406 13:43:35 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57766' 00:04:36.406 13:43:35 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57766 00:04:36.406 13:43:35 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57766 00:04:36.975 ************************************ 00:04:36.975 END TEST spdkcli_tcp 00:04:36.975 ************************************ 00:04:36.975 00:04:36.975 real 0m1.605s 00:04:36.975 user 0m2.733s 00:04:36.975 sys 0m0.540s 00:04:36.975 13:43:36 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.975 13:43:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:36.975 13:43:36 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:36.975 13:43:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.975 13:43:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.975 13:43:36 -- common/autotest_common.sh@10 -- # set +x 00:04:36.975 ************************************ 00:04:36.975 START TEST dpdk_mem_utility 00:04:36.975 ************************************ 00:04:36.975 13:43:36 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:36.975 * Looking for test storage... 
00:04:37.234 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:37.234 13:43:36 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:37.234 13:43:36 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:37.234 13:43:36 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:37.234 13:43:36 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:37.234 13:43:36 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.234 13:43:36 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.234 13:43:36 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.234 13:43:36 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.234 13:43:36 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.234 13:43:36 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.234 13:43:36 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.234 13:43:36 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.234 13:43:36 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.234 13:43:36 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.234 13:43:36 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.234 13:43:36 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:37.234 13:43:36 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:37.234 13:43:36 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.234 13:43:36 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:37.234 13:43:36 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:37.234 13:43:36 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:37.234 13:43:36 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.234 13:43:36 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:37.234 13:43:36 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.234 13:43:36 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:37.234 13:43:36 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:37.234 13:43:36 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.234 13:43:36 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:37.234 13:43:36 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.234 13:43:36 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.234 13:43:36 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.234 13:43:36 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:37.234 13:43:36 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.234 13:43:36 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:37.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.234 --rc genhtml_branch_coverage=1 00:04:37.234 --rc genhtml_function_coverage=1 00:04:37.234 --rc genhtml_legend=1 00:04:37.234 --rc geninfo_all_blocks=1 00:04:37.234 --rc geninfo_unexecuted_blocks=1 00:04:37.234 00:04:37.234 ' 00:04:37.234 13:43:36 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:37.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.234 --rc 
genhtml_branch_coverage=1 00:04:37.234 --rc genhtml_function_coverage=1 00:04:37.234 --rc genhtml_legend=1 00:04:37.234 --rc geninfo_all_blocks=1 00:04:37.234 --rc geninfo_unexecuted_blocks=1 00:04:37.234 00:04:37.234 ' 00:04:37.234 13:43:36 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:37.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.234 --rc genhtml_branch_coverage=1 00:04:37.234 --rc genhtml_function_coverage=1 00:04:37.234 --rc genhtml_legend=1 00:04:37.234 --rc geninfo_all_blocks=1 00:04:37.234 --rc geninfo_unexecuted_blocks=1 00:04:37.234 00:04:37.234 ' 00:04:37.234 13:43:36 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:37.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.234 --rc genhtml_branch_coverage=1 00:04:37.234 --rc genhtml_function_coverage=1 00:04:37.234 --rc genhtml_legend=1 00:04:37.234 --rc geninfo_all_blocks=1 00:04:37.234 --rc geninfo_unexecuted_blocks=1 00:04:37.234 00:04:37.234 ' 00:04:37.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.235 13:43:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:37.235 13:43:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57857 00:04:37.235 13:43:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:37.235 13:43:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57857 00:04:37.235 13:43:36 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57857 ']' 00:04:37.235 13:43:36 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.235 13:43:36 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.235 13:43:36 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.235 13:43:36 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.235 13:43:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:37.235 [2024-12-06 13:43:36.556264] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:04:37.235 [2024-12-06 13:43:36.556545] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57857 ] 00:04:37.494 [2024-12-06 13:43:36.701474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.494 [2024-12-06 13:43:36.754479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.494 [2024-12-06 13:43:36.844795] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:37.754 13:43:37 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:37.754 13:43:37 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:37.754 13:43:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:37.754 13:43:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:37.754 13:43:37 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.754 13:43:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:37.754 { 00:04:37.754 "filename": "/tmp/spdk_mem_dump.txt" 00:04:37.754 } 00:04:37.754 13:43:37 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.754 13:43:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:38.015 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:38.015 1 heaps totaling size 818.000000 MiB 00:04:38.015 size: 818.000000 MiB heap id: 0 00:04:38.015 end heaps---------- 00:04:38.015 9 mempools totaling size 603.782043 MiB 00:04:38.015 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:38.015 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:38.015 size: 100.555481 MiB name: bdev_io_57857 00:04:38.015 size: 50.003479 MiB name: msgpool_57857 00:04:38.015 size: 36.509338 MiB name: fsdev_io_57857 00:04:38.015 size: 21.763794 MiB name: PDU_Pool 00:04:38.015 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:38.015 size: 4.133484 MiB name: evtpool_57857 00:04:38.015 size: 0.026123 MiB name: Session_Pool 00:04:38.015 end mempools------- 00:04:38.015 6 memzones totaling size 4.142822 MiB 00:04:38.015 size: 1.000366 MiB name: RG_ring_0_57857 00:04:38.015 size: 1.000366 MiB name: RG_ring_1_57857 00:04:38.015 size: 1.000366 MiB name: RG_ring_4_57857 00:04:38.015 size: 1.000366 MiB name: RG_ring_5_57857 00:04:38.015 size: 0.125366 MiB name: RG_ring_2_57857 00:04:38.015 size: 0.015991 MiB name: RG_ring_3_57857 00:04:38.015 end memzones------- 00:04:38.015 13:43:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:38.015 heap id: 0 total size: 818.000000 MiB number of busy elements: 317 number of free elements: 15 00:04:38.015 list of free elements. 
size: 10.802490 MiB 00:04:38.015 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:38.015 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:38.015 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:38.015 element at address: 0x200000400000 with size: 0.993958 MiB 00:04:38.015 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:38.015 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:38.015 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:38.015 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:38.015 element at address: 0x20001ae00000 with size: 0.567688 MiB 00:04:38.015 element at address: 0x20000a600000 with size: 0.488892 MiB 00:04:38.015 element at address: 0x200000c00000 with size: 0.486267 MiB 00:04:38.015 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:38.015 element at address: 0x200003e00000 with size: 0.480286 MiB 00:04:38.015 element at address: 0x200028200000 with size: 0.395752 MiB 00:04:38.015 element at address: 0x200000800000 with size: 0.351746 MiB 00:04:38.015 list of standard malloc elements. size: 199.268616 MiB 00:04:38.015 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:38.015 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:38.015 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:38.015 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:38.015 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:38.015 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:38.015 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:38.015 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:38.015 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:38.015 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:38.015 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:38.015 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:04:38.015 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:04:38.015 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:04:38.015 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:04:38.015 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:04:38.015 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:04:38.015 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:04:38.015 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:04:38.015 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:04:38.015 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:04:38.015 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:04:38.015 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:04:38.015 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:04:38.015 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:04:38.015 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:04:38.015 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:04:38.015 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:04:38.015 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:04:38.015 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:04:38.015 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:04:38.015 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:04:38.015 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:04:38.016 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:04:38.016 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:04:38.016 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:04:38.016 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:38.016 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:38.016 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:04:38.016 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:38.016 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:38.016 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:04:38.016 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:04:38.016 element at address: 0x20000085e580 with size: 0.000183 MiB 00:04:38.016 element at address: 0x20000087e840 with size: 0.000183 MiB 00:04:38.016 element at address: 0x20000087e900 with size: 0.000183 MiB 00:04:38.016 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:04:38.016 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:04:38.016 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:04:38.016 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:04:38.016 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:04:38.016 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:04:38.016 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:04:38.016 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:04:38.016 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:04:38.016 element at address: 0x20000087f080 with size: 0.000183 MiB 00:04:38.016 element at address: 0x20000087f140 with size: 0.000183 MiB 00:04:38.016 element at address: 0x20000087f200 with size: 0.000183 MiB 00:04:38.016 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:04:38.016 element at address: 0x20000087f380 with size: 0.000183 MiB 00:04:38.016 element at address: 0x20000087f440 with size: 0.000183 MiB 00:04:38.016 element at address: 0x20000087f500 with size: 0.000183 MiB 00:04:38.016 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:38.016 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:38.016 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:38.016 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:04:38.016 element at 
address: 0x200000c7d3c0 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:38.016 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:04:38.016 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:04:38.016 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:04:38.016 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:04:38.016 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:04:38.016 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:04:38.016 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:04:38.016 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:04:38.016 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:04:38.016 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:04:38.016 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:04:38.016 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:38.016 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:38.016 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:04:38.016 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:38.016 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:38.016 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:38.016 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae91540 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae91600 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae916c0 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae92c80 with size: 0.000183 MiB 
00:04:38.017 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:04:38.017 element at 
address: 0x20001ae95200 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:38.017 element at address: 0x200028265500 with size: 0.000183 MiB 00:04:38.017 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20002826c1c0 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20002826c3c0 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20002826c480 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20002826c540 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20002826c600 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20002826c780 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20002826c840 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20002826c900 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20002826d080 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20002826d140 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20002826d200 with size: 0.000183 MiB 00:04:38.017 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826d380 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826d440 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826d500 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826d680 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826d740 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826d800 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826d980 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826da40 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826db00 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826de00 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826df80 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826e040 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826e100 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826e280 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826e340 
with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826e400 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826e580 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826e640 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826e700 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826e880 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826e940 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826f000 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826f180 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826f240 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826f300 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826f480 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826f540 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826f600 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826f780 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826f840 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826f900 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:38.018 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:38.018 list of memzone associated elements. 
size: 607.928894 MiB 00:04:38.018 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:38.018 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:38.018 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:38.018 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:38.018 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:38.018 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_57857_0 00:04:38.018 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:38.018 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57857_0 00:04:38.018 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:38.018 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57857_0 00:04:38.018 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:38.018 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:38.018 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:38.018 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:38.018 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:38.018 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57857_0 00:04:38.018 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:38.018 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57857 00:04:38.018 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:38.018 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57857 00:04:38.018 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:38.018 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:38.018 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:38.018 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:38.018 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:38.018 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:38.018 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:38.018 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:38.018 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:38.018 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57857 00:04:38.018 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:38.018 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57857 00:04:38.018 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:38.018 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57857 00:04:38.018 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:04:38.018 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57857 00:04:38.018 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:38.018 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57857 00:04:38.018 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:38.018 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57857 00:04:38.018 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:38.018 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:38.018 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:38.018 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:38.018 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:38.018 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:04:38.018 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:38.018 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57857 00:04:38.019 element at address: 0x20000085e640 with size: 0.125488 MiB 00:04:38.019 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57857 00:04:38.019 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:38.019 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:38.019 element at address: 0x200028265680 with size: 0.023743 MiB 00:04:38.019 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:38.019 element at address: 0x20000085a380 with size: 0.016113 MiB 00:04:38.019 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57857 00:04:38.019 element at address: 0x20002826b7c0 with size: 0.002441 MiB 00:04:38.019 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:38.019 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:04:38.019 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57857 00:04:38.019 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:38.019 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57857 00:04:38.019 element at address: 0x20000085a180 with size: 0.000305 MiB 00:04:38.019 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57857 00:04:38.019 element at address: 0x20002826c280 with size: 0.000305 MiB 00:04:38.019 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:38.019 13:43:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:38.019 13:43:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57857 00:04:38.019 13:43:37 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57857 ']' 00:04:38.019 13:43:37 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57857 00:04:38.019 13:43:37 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:38.019 13:43:37 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:38.019 13:43:37 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57857 00:04:38.019 killing process with pid 57857 00:04:38.019 13:43:37 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:38.019 13:43:37 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:38.019 13:43:37 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57857' 00:04:38.019 13:43:37 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57857 00:04:38.019 13:43:37 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57857 00:04:38.587 ************************************ 00:04:38.587 END TEST dpdk_mem_utility 00:04:38.587 ************************************ 00:04:38.587 00:04:38.587 real 0m1.474s 00:04:38.587 user 0m1.375s 00:04:38.587 sys 0m0.479s 00:04:38.587 13:43:37 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.587 13:43:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:38.587 13:43:37 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:38.588 13:43:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.588 13:43:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.588 13:43:37 -- common/autotest_common.sh@10 -- # set +x 
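The dpdk_mem_utility test that finished above drives a plain spdk_tgt through a single RPC plus a helper script. A minimal sketch of the same flow, under the assumption of an SPDK checkout at the path this job uses and a target that has already come up on /var/tmp/spdk.sock (the test waits for it via waitforlisten before issuing RPCs):

    SPDK=/home/vagrant/spdk_repo/spdk                 # repo path taken from this log
    $SPDK/build/bin/spdk_tgt &                        # same binary the test launches
    # wait for /var/tmp/spdk.sock before issuing RPCs (the test uses waitforlisten)
    $SPDK/scripts/rpc.py env_dpdk_get_mem_stats       # DPDK dumps its stats; the reply above names /tmp/spdk_mem_dump.txt
    $SPDK/scripts/dpdk_mem_info.py                    # summarize heaps, mempools and memzones from that dump
    $SPDK/scripts/dpdk_mem_info.py -m 0               # per-element detail for heap 0, as in the listing above

The heap/mempool/memzone report printed by the test is the output of those last two invocations; the ring, msgpool and bdev_io names carry the spdk_tgt pid (57857 in this run) as a suffix.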
00:04:38.588 ************************************ 00:04:38.588 START TEST event 00:04:38.588 ************************************ 00:04:38.588 13:43:37 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:38.588 * Looking for test storage... 00:04:38.588 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:38.588 13:43:37 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:38.588 13:43:37 event -- common/autotest_common.sh@1711 -- # lcov --version 00:04:38.588 13:43:37 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:38.848 13:43:37 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:38.848 13:43:37 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.848 13:43:37 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.848 13:43:37 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.848 13:43:37 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.848 13:43:37 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.848 13:43:37 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.848 13:43:37 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.848 13:43:37 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.848 13:43:37 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.848 13:43:37 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.848 13:43:37 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.848 13:43:37 event -- scripts/common.sh@344 -- # case "$op" in 00:04:38.848 13:43:37 event -- scripts/common.sh@345 -- # : 1 00:04:38.848 13:43:37 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.848 13:43:37 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:38.848 13:43:37 event -- scripts/common.sh@365 -- # decimal 1 00:04:38.848 13:43:37 event -- scripts/common.sh@353 -- # local d=1 00:04:38.848 13:43:37 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.848 13:43:37 event -- scripts/common.sh@355 -- # echo 1 00:04:38.848 13:43:37 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.848 13:43:38 event -- scripts/common.sh@366 -- # decimal 2 00:04:38.848 13:43:38 event -- scripts/common.sh@353 -- # local d=2 00:04:38.848 13:43:38 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.848 13:43:38 event -- scripts/common.sh@355 -- # echo 2 00:04:38.848 13:43:38 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.848 13:43:38 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.848 13:43:38 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.848 13:43:38 event -- scripts/common.sh@368 -- # return 0 00:04:38.848 13:43:38 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.848 13:43:38 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:38.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.848 --rc genhtml_branch_coverage=1 00:04:38.848 --rc genhtml_function_coverage=1 00:04:38.848 --rc genhtml_legend=1 00:04:38.848 --rc geninfo_all_blocks=1 00:04:38.848 --rc geninfo_unexecuted_blocks=1 00:04:38.848 00:04:38.848 ' 00:04:38.848 13:43:38 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:38.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.848 --rc genhtml_branch_coverage=1 00:04:38.848 --rc genhtml_function_coverage=1 00:04:38.848 --rc genhtml_legend=1 00:04:38.848 --rc 
geninfo_all_blocks=1 00:04:38.848 --rc geninfo_unexecuted_blocks=1 00:04:38.848 00:04:38.848 ' 00:04:38.848 13:43:38 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:38.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.848 --rc genhtml_branch_coverage=1 00:04:38.848 --rc genhtml_function_coverage=1 00:04:38.848 --rc genhtml_legend=1 00:04:38.848 --rc geninfo_all_blocks=1 00:04:38.848 --rc geninfo_unexecuted_blocks=1 00:04:38.848 00:04:38.848 ' 00:04:38.848 13:43:38 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:38.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.848 --rc genhtml_branch_coverage=1 00:04:38.848 --rc genhtml_function_coverage=1 00:04:38.848 --rc genhtml_legend=1 00:04:38.848 --rc geninfo_all_blocks=1 00:04:38.848 --rc geninfo_unexecuted_blocks=1 00:04:38.848 00:04:38.848 ' 00:04:38.848 13:43:38 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:38.848 13:43:38 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:38.848 13:43:38 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:38.848 13:43:38 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:38.848 13:43:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.848 13:43:38 event -- common/autotest_common.sh@10 -- # set +x 00:04:38.848 ************************************ 00:04:38.848 START TEST event_perf 00:04:38.848 ************************************ 00:04:38.848 13:43:38 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:38.848 Running I/O for 1 seconds...[2024-12-06 13:43:38.041101] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:04:38.848 [2024-12-06 13:43:38.041203] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57936 ] 00:04:38.848 [2024-12-06 13:43:38.171618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:38.848 [2024-12-06 13:43:38.222655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:38.848 [2024-12-06 13:43:38.222797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:38.848 [2024-12-06 13:43:38.222923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:38.848 [2024-12-06 13:43:38.222925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.227 Running I/O for 1 seconds... 00:04:40.227 lcore 0: 133935 00:04:40.227 lcore 1: 133934 00:04:40.227 lcore 2: 133933 00:04:40.227 lcore 3: 133933 00:04:40.227 done. 
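The per-lcore counters printed above come straight from the event_perf app; the exact invocation is visible in the run_test line earlier. A minimal sketch for reproducing it against a local build (paths as used by this job; the binaries only exist after a full SPDK build):

    cd /home/vagrant/spdk_repo/spdk
    ./test/event/event_perf/event_perf -m 0xF -t 1    # four reactors (core mask 0xF), 1-second run; prints events handled per lcore
    ./test/event/reactor/reactor -t 1                 # single-reactor tick test exercised next
    ./test/event/reactor_perf/reactor_perf -t 1       # raw event throughput on one reactor

Roughly equal counts across lcores 0-3, as in the run above, suggest the events were spread evenly over the four reactors.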
00:04:40.227 00:04:40.227 real 0m1.255s 00:04:40.227 user 0m4.081s 00:04:40.227 ************************************ 00:04:40.227 END TEST event_perf 00:04:40.227 ************************************ 00:04:40.227 sys 0m0.056s 00:04:40.227 13:43:39 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.227 13:43:39 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:40.227 13:43:39 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:40.227 13:43:39 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:40.227 13:43:39 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.227 13:43:39 event -- common/autotest_common.sh@10 -- # set +x 00:04:40.227 ************************************ 00:04:40.227 START TEST event_reactor 00:04:40.227 ************************************ 00:04:40.227 13:43:39 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:40.227 [2024-12-06 13:43:39.347769] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:04:40.227 [2024-12-06 13:43:39.347833] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57969 ] 00:04:40.227 [2024-12-06 13:43:39.485195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.227 [2024-12-06 13:43:39.528733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.606 test_start 00:04:41.606 oneshot 00:04:41.606 tick 100 00:04:41.606 tick 100 00:04:41.606 tick 250 00:04:41.606 tick 100 00:04:41.606 tick 100 00:04:41.606 tick 100 00:04:41.606 tick 250 00:04:41.606 tick 500 00:04:41.606 tick 100 00:04:41.606 tick 100 00:04:41.606 tick 250 00:04:41.606 tick 100 00:04:41.607 tick 100 00:04:41.607 test_end 00:04:41.607 ************************************ 00:04:41.607 END TEST event_reactor 00:04:41.607 ************************************ 00:04:41.607 00:04:41.607 real 0m1.246s 00:04:41.607 user 0m1.106s 00:04:41.607 sys 0m0.036s 00:04:41.607 13:43:40 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.607 13:43:40 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:41.607 13:43:40 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:41.607 13:43:40 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:41.607 13:43:40 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.607 13:43:40 event -- common/autotest_common.sh@10 -- # set +x 00:04:41.607 ************************************ 00:04:41.607 START TEST event_reactor_perf 00:04:41.607 ************************************ 00:04:41.607 13:43:40 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:41.607 [2024-12-06 13:43:40.650368] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:04:41.607 [2024-12-06 13:43:40.650467] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58003 ] 00:04:41.607 [2024-12-06 13:43:40.789858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.607 [2024-12-06 13:43:40.835412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.544 test_start 00:04:42.544 test_end 00:04:42.544 Performance: 470743 events per second 00:04:42.544 00:04:42.544 real 0m1.257s 00:04:42.544 user 0m1.111s 00:04:42.544 sys 0m0.041s 00:04:42.544 13:43:41 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.544 13:43:41 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:42.544 ************************************ 00:04:42.544 END TEST event_reactor_perf 00:04:42.544 ************************************ 00:04:42.544 13:43:41 event -- event/event.sh@49 -- # uname -s 00:04:42.544 13:43:41 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:42.544 13:43:41 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:42.544 13:43:41 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.544 13:43:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.544 13:43:41 event -- common/autotest_common.sh@10 -- # set +x 00:04:42.804 ************************************ 00:04:42.804 START TEST event_scheduler 00:04:42.804 ************************************ 00:04:42.804 13:43:41 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:42.804 * Looking for test storage... 
00:04:42.804 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:42.804 13:43:42 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:42.804 13:43:42 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:04:42.804 13:43:42 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:42.804 13:43:42 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:42.804 13:43:42 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.804 13:43:42 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.804 13:43:42 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.804 13:43:42 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.804 13:43:42 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.804 13:43:42 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.804 13:43:42 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.804 13:43:42 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.804 13:43:42 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.804 13:43:42 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.804 13:43:42 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.804 13:43:42 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:42.804 13:43:42 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:42.804 13:43:42 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.804 13:43:42 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:42.804 13:43:42 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:42.804 13:43:42 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:42.804 13:43:42 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.804 13:43:42 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:42.804 13:43:42 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.804 13:43:42 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:42.804 13:43:42 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:42.804 13:43:42 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.804 13:43:42 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:42.804 13:43:42 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.804 13:43:42 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.804 13:43:42 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.804 13:43:42 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:42.804 13:43:42 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.804 13:43:42 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:42.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.804 --rc genhtml_branch_coverage=1 00:04:42.804 --rc genhtml_function_coverage=1 00:04:42.804 --rc genhtml_legend=1 00:04:42.804 --rc geninfo_all_blocks=1 00:04:42.804 --rc geninfo_unexecuted_blocks=1 00:04:42.804 00:04:42.804 ' 00:04:42.804 13:43:42 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:42.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.804 --rc genhtml_branch_coverage=1 00:04:42.804 --rc genhtml_function_coverage=1 00:04:42.804 --rc genhtml_legend=1 00:04:42.804 --rc geninfo_all_blocks=1 00:04:42.804 --rc geninfo_unexecuted_blocks=1 00:04:42.804 00:04:42.804 ' 00:04:42.804 13:43:42 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:42.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.804 --rc genhtml_branch_coverage=1 00:04:42.804 --rc genhtml_function_coverage=1 00:04:42.804 --rc genhtml_legend=1 00:04:42.804 --rc geninfo_all_blocks=1 00:04:42.804 --rc geninfo_unexecuted_blocks=1 00:04:42.804 00:04:42.804 ' 00:04:42.804 13:43:42 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:42.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.804 --rc genhtml_branch_coverage=1 00:04:42.804 --rc genhtml_function_coverage=1 00:04:42.804 --rc genhtml_legend=1 00:04:42.804 --rc geninfo_all_blocks=1 00:04:42.804 --rc geninfo_unexecuted_blocks=1 00:04:42.804 00:04:42.804 ' 00:04:42.804 13:43:42 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:42.804 13:43:42 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58074 00:04:42.804 13:43:42 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:42.804 13:43:42 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.804 13:43:42 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58074 00:04:42.804 13:43:42 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58074 ']' 00:04:42.804 13:43:42 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.804 13:43:42 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:42.804 13:43:42 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.804 13:43:42 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:42.804 13:43:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:42.804 [2024-12-06 13:43:42.203433] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:04:42.804 [2024-12-06 13:43:42.203538] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58074 ] 00:04:43.064 [2024-12-06 13:43:42.356591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:43.064 [2024-12-06 13:43:42.415733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.064 [2024-12-06 13:43:42.415885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.064 [2024-12-06 13:43:42.416012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:43.064 [2024-12-06 13:43:42.416017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:43.064 13:43:42 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.064 13:43:42 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:43.064 13:43:42 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:43.064 13:43:42 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.064 13:43:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:43.323 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:43.323 POWER: Cannot set governor of lcore 0 to userspace 00:04:43.323 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:43.323 POWER: Cannot set governor of lcore 0 to performance 00:04:43.323 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:43.323 POWER: Cannot set governor of lcore 0 to userspace 00:04:43.323 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:43.323 POWER: Cannot set governor of lcore 0 to userspace 00:04:43.323 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:43.323 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:43.323 POWER: Unable to set Power Management Environment for lcore 0 00:04:43.323 [2024-12-06 13:43:42.470494] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:04:43.323 [2024-12-06 13:43:42.470511] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:04:43.323 [2024-12-06 13:43:42.470522] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:43.323 [2024-12-06 13:43:42.470537] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:43.323 [2024-12-06 13:43:42.470546] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:43.323 [2024-12-06 13:43:42.470555] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:43.323 13:43:42 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.323 13:43:42 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:43.323 13:43:42 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.323 13:43:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:43.323 [2024-12-06 13:43:42.536799] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:43.323 [2024-12-06 13:43:42.576513] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:43.323 13:43:42 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.323 13:43:42 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:43.323 13:43:42 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.323 13:43:42 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.323 13:43:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:43.323 ************************************ 00:04:43.323 START TEST scheduler_create_thread 00:04:43.323 ************************************ 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.323 2 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.323 3 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.323 4 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.323 5 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.323 6 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.323 7 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.323 8 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.323 9 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.323 10 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.323 13:43:42 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.323 13:43:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.891 13:43:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.891 13:43:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:43.891 13:43:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.891 13:43:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.268 13:43:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.268 13:43:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:45.268 13:43:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:45.268 13:43:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.268 13:43:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.677 ************************************ 00:04:46.677 END TEST scheduler_create_thread 00:04:46.677 ************************************ 00:04:46.677 13:43:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.677 00:04:46.677 real 0m3.094s 00:04:46.677 user 0m0.018s 00:04:46.677 sys 0m0.009s 00:04:46.677 13:43:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.677 13:43:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.677 13:43:45 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:46.677 13:43:45 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58074 00:04:46.677 13:43:45 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58074 ']' 00:04:46.677 13:43:45 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58074 00:04:46.677 13:43:45 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:46.677 13:43:45 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:46.677 13:43:45 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58074 00:04:46.677 killing process with pid 58074 00:04:46.677 13:43:45 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:46.677 13:43:45 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:46.677 13:43:45 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
58074' 00:04:46.677 13:43:45 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58074 00:04:46.677 13:43:45 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58074 00:04:46.677 [2024-12-06 13:43:46.064129] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:46.936 00:04:46.936 real 0m4.332s 00:04:46.936 user 0m6.851s 00:04:46.936 sys 0m0.368s 00:04:46.936 13:43:46 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.936 ************************************ 00:04:46.936 END TEST event_scheduler 00:04:46.936 ************************************ 00:04:46.936 13:43:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:46.936 13:43:46 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:46.936 13:43:46 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:46.936 13:43:46 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.936 13:43:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.936 13:43:46 event -- common/autotest_common.sh@10 -- # set +x 00:04:46.936 ************************************ 00:04:46.936 START TEST app_repeat 00:04:46.936 ************************************ 00:04:46.936 13:43:46 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:47.195 13:43:46 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.195 13:43:46 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.195 13:43:46 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:47.195 13:43:46 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:47.195 13:43:46 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:47.195 13:43:46 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:47.195 13:43:46 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:47.195 Process app_repeat pid: 58166 00:04:47.195 13:43:46 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58166 00:04:47.195 13:43:46 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:47.195 13:43:46 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58166' 00:04:47.195 13:43:46 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:47.195 13:43:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:47.195 spdk_app_start Round 0 00:04:47.195 13:43:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:47.195 13:43:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58166 /var/tmp/spdk-nbd.sock 00:04:47.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:47.195 13:43:46 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58166 ']' 00:04:47.195 13:43:46 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:47.195 13:43:46 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:47.195 13:43:46 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
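
The app_repeat binary has just been launched against /var/tmp/spdk-nbd.sock, and the trace above is the waitforlisten step that blocks until the new process (pid 58166) is actually serving RPCs. A minimal sketch of that polling idea, assuming the rpc.py path from the log and the generic rpc_get_methods call; this is an illustration of the pattern, not the autotest_common.sh helper itself:

    # Poll the UNIX-domain RPC socket until the target app answers, or give up.
    sock=/var/tmp/spdk-nbd.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    max_retries=100                      # mirrors the max_retries value seen in the trace
    for ((i = 0; i < max_retries; i++)); do
        if [ -S "$sock" ] && "$rpc" -s "$sock" rpc_get_methods &> /dev/null; then
            break                        # process is up and listening on the socket
        fi
        sleep 0.5
    done
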
00:04:47.195 13:43:46 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:47.195 13:43:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:47.195 [2024-12-06 13:43:46.374546] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:04:47.195 [2024-12-06 13:43:46.374640] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58166 ] 00:04:47.195 [2024-12-06 13:43:46.517888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:47.195 [2024-12-06 13:43:46.570664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.195 [2024-12-06 13:43:46.570679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.454 [2024-12-06 13:43:46.643256] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:47.454 13:43:46 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.454 13:43:46 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:47.454 13:43:46 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:47.712 Malloc0 00:04:47.712 13:43:46 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:47.971 Malloc1 00:04:47.971 13:43:47 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:47.971 13:43:47 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.971 13:43:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:47.971 13:43:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:47.971 13:43:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.971 13:43:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:47.971 13:43:47 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:47.971 13:43:47 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.971 13:43:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:47.971 13:43:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:47.971 13:43:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.971 13:43:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:47.971 13:43:47 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:47.971 13:43:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:47.971 13:43:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:47.971 13:43:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:48.230 /dev/nbd0 00:04:48.230 13:43:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:48.230 13:43:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:48.230 13:43:47 event.app_repeat -- common/autotest_common.sh@872 -- # local 
nbd_name=nbd0 00:04:48.230 13:43:47 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:48.230 13:43:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:48.230 13:43:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:48.230 13:43:47 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:48.230 13:43:47 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:48.230 13:43:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:48.230 13:43:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:48.230 13:43:47 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:48.230 1+0 records in 00:04:48.230 1+0 records out 00:04:48.230 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000169386 s, 24.2 MB/s 00:04:48.230 13:43:47 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:48.230 13:43:47 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:48.230 13:43:47 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:48.230 13:43:47 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:48.230 13:43:47 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:48.230 13:43:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:48.230 13:43:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:48.230 13:43:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:48.488 /dev/nbd1 00:04:48.488 13:43:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:48.488 13:43:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:48.488 13:43:47 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:48.488 13:43:47 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:48.488 13:43:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:48.488 13:43:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:48.488 13:43:47 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:48.488 13:43:47 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:48.488 13:43:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:48.488 13:43:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:48.488 13:43:47 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:48.488 1+0 records in 00:04:48.488 1+0 records out 00:04:48.488 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372772 s, 11.0 MB/s 00:04:48.488 13:43:47 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:48.488 13:43:47 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:48.488 13:43:47 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:48.488 13:43:47 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:48.488 13:43:47 event.app_repeat -- 
common/autotest_common.sh@893 -- # return 0 00:04:48.488 13:43:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:48.488 13:43:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:48.488 13:43:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:48.488 13:43:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.489 13:43:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:48.746 13:43:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:48.746 { 00:04:48.746 "nbd_device": "/dev/nbd0", 00:04:48.746 "bdev_name": "Malloc0" 00:04:48.746 }, 00:04:48.746 { 00:04:48.746 "nbd_device": "/dev/nbd1", 00:04:48.746 "bdev_name": "Malloc1" 00:04:48.746 } 00:04:48.746 ]' 00:04:48.746 13:43:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:48.746 13:43:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:48.746 { 00:04:48.746 "nbd_device": "/dev/nbd0", 00:04:48.746 "bdev_name": "Malloc0" 00:04:48.746 }, 00:04:48.746 { 00:04:48.746 "nbd_device": "/dev/nbd1", 00:04:48.746 "bdev_name": "Malloc1" 00:04:48.746 } 00:04:48.746 ]' 00:04:49.005 13:43:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:49.005 /dev/nbd1' 00:04:49.005 13:43:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:49.005 13:43:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:49.005 /dev/nbd1' 00:04:49.005 13:43:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:49.005 13:43:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:49.005 13:43:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:49.005 13:43:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:49.005 13:43:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:49.005 13:43:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.005 13:43:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:49.005 13:43:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:49.005 13:43:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:49.005 13:43:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:49.005 13:43:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:49.005 256+0 records in 00:04:49.005 256+0 records out 00:04:49.005 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00915602 s, 115 MB/s 00:04:49.005 13:43:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:49.005 13:43:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:49.005 256+0 records in 00:04:49.005 256+0 records out 00:04:49.005 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255336 s, 41.1 MB/s 00:04:49.005 13:43:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:49.005 13:43:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:49.005 256+0 records in 00:04:49.005 
256+0 records out 00:04:49.005 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0290489 s, 36.1 MB/s 00:04:49.005 13:43:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:49.005 13:43:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.005 13:43:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:49.005 13:43:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:49.005 13:43:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:49.005 13:43:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:49.005 13:43:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:49.005 13:43:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:49.005 13:43:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:49.005 13:43:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:49.005 13:43:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:49.005 13:43:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:49.005 13:43:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:49.005 13:43:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.005 13:43:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.005 13:43:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:49.005 13:43:48 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:49.005 13:43:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:49.005 13:43:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:49.265 13:43:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:49.265 13:43:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:49.265 13:43:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:49.265 13:43:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:49.265 13:43:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:49.265 13:43:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:49.265 13:43:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:49.265 13:43:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:49.265 13:43:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:49.265 13:43:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:49.524 13:43:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:49.524 13:43:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:49.524 13:43:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:49.524 13:43:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:49.524 13:43:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
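
The dd/cmp traffic above is Round 0's data-verify pass: nbd_common.sh writes 256 blocks of 4096 bytes of random data through each exported /dev/nbd device and then compares the devices byte-for-byte against the source file. A condensed sketch of that pattern with the sizes and flags taken from the trace (a hypothetical stand-alone version, not the verbatim nbd_dd_data_verify helper):

    # Write 1 MiB of random data through each NBD device, then read it back and compare.
    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # O_DIRECT write through the export
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$nbd"                              # byte-for-byte readback check
    done
    rm "$tmp"
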
00:04:49.524 13:43:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:49.524 13:43:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:49.524 13:43:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:49.524 13:43:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:49.524 13:43:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.524 13:43:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:49.784 13:43:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:49.784 13:43:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:49.784 13:43:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:49.784 13:43:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:49.784 13:43:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:49.784 13:43:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:49.784 13:43:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:49.784 13:43:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:49.784 13:43:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:49.784 13:43:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:49.784 13:43:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:49.784 13:43:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:49.784 13:43:49 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:50.043 13:43:49 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:50.302 [2024-12-06 13:43:49.641092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:50.302 [2024-12-06 13:43:49.679020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.302 [2024-12-06 13:43:49.679037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.562 [2024-12-06 13:43:49.751301] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:50.562 [2024-12-06 13:43:49.751400] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:50.562 [2024-12-06 13:43:49.751413] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:53.097 spdk_app_start Round 1 00:04:53.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:53.097 13:43:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:53.097 13:43:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:53.097 13:43:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58166 /var/tmp/spdk-nbd.sock 00:04:53.097 13:43:52 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58166 ']' 00:04:53.098 13:43:52 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:53.098 13:43:52 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.098 13:43:52 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
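
Round 0 ends with spdk_kill_instance SIGTERM followed by a three-second sleep, after which the reactors come back up and the test announces Round 1 and waits for the RPC socket again. Read as a whole, the event.sh driver boils down to the loop sketched below (a condensed, assumed reading based on the trace, not the script's verbatim contents):

    # One iteration per round: bring up bdevs, verify data over NBD, tear the app down.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock    # app is serving RPCs again
        $rpc bdev_malloc_create 64 4096                       # Malloc0: 64 MB, 4096-byte blocks
        $rpc bdev_malloc_create 64 4096                       # Malloc1
        nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        $rpc spdk_kill_instance SIGTERM                       # end this round
        sleep 3                                               # give the app time to come back up for the next round
    done
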
00:04:53.098 13:43:52 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.098 13:43:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:53.357 13:43:52 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:53.357 13:43:52 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:53.357 13:43:52 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:53.616 Malloc0 00:04:53.616 13:43:52 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:53.875 Malloc1 00:04:53.875 13:43:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:53.875 13:43:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.875 13:43:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:53.875 13:43:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:53.875 13:43:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.875 13:43:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:53.875 13:43:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:53.875 13:43:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.875 13:43:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:53.875 13:43:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:53.875 13:43:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.875 13:43:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:53.875 13:43:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:53.875 13:43:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:53.875 13:43:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.875 13:43:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:54.134 /dev/nbd0 00:04:54.134 13:43:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:54.134 13:43:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:54.134 13:43:53 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:54.134 13:43:53 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:54.134 13:43:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:54.134 13:43:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:54.134 13:43:53 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:54.134 13:43:53 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:54.134 13:43:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:54.134 13:43:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:54.134 13:43:53 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:54.134 1+0 records in 00:04:54.134 1+0 records out 
00:04:54.134 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281664 s, 14.5 MB/s 00:04:54.134 13:43:53 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:54.134 13:43:53 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:54.134 13:43:53 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:54.134 13:43:53 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:54.134 13:43:53 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:54.134 13:43:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:54.134 13:43:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.134 13:43:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:54.394 /dev/nbd1 00:04:54.394 13:43:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:54.394 13:43:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:54.394 13:43:53 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:54.394 13:43:53 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:54.394 13:43:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:54.394 13:43:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:54.394 13:43:53 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:54.394 13:43:53 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:54.394 13:43:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:54.394 13:43:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:54.394 13:43:53 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:54.394 1+0 records in 00:04:54.394 1+0 records out 00:04:54.394 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264592 s, 15.5 MB/s 00:04:54.394 13:43:53 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:54.394 13:43:53 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:54.394 13:43:53 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:54.394 13:43:53 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:54.394 13:43:53 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:54.394 13:43:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:54.394 13:43:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.394 13:43:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:54.394 13:43:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.394 13:43:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:54.653 13:43:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:54.653 { 00:04:54.653 "nbd_device": "/dev/nbd0", 00:04:54.653 "bdev_name": "Malloc0" 00:04:54.653 }, 00:04:54.653 { 00:04:54.653 "nbd_device": "/dev/nbd1", 00:04:54.653 "bdev_name": "Malloc1" 00:04:54.653 } 
00:04:54.653 ]' 00:04:54.653 13:43:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:54.653 { 00:04:54.653 "nbd_device": "/dev/nbd0", 00:04:54.653 "bdev_name": "Malloc0" 00:04:54.653 }, 00:04:54.653 { 00:04:54.653 "nbd_device": "/dev/nbd1", 00:04:54.653 "bdev_name": "Malloc1" 00:04:54.653 } 00:04:54.653 ]' 00:04:54.653 13:43:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:54.653 13:43:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:54.653 /dev/nbd1' 00:04:54.653 13:43:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:54.653 /dev/nbd1' 00:04:54.653 13:43:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:54.653 13:43:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:54.653 13:43:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:54.653 13:43:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:54.653 13:43:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:54.653 13:43:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:54.653 13:43:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.653 13:43:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:54.653 13:43:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:54.653 13:43:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:54.653 13:43:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:54.653 13:43:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:54.653 256+0 records in 00:04:54.653 256+0 records out 00:04:54.654 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00852392 s, 123 MB/s 00:04:54.654 13:43:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:54.654 13:43:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:54.654 256+0 records in 00:04:54.654 256+0 records out 00:04:54.654 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252995 s, 41.4 MB/s 00:04:54.654 13:43:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:54.654 13:43:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:54.654 256+0 records in 00:04:54.654 256+0 records out 00:04:54.654 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0282017 s, 37.2 MB/s 00:04:54.654 13:43:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:54.654 13:43:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.654 13:43:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:54.654 13:43:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:54.654 13:43:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:54.654 13:43:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:54.654 13:43:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:54.654 13:43:54 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:54.654 13:43:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:54.654 13:43:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:54.654 13:43:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:54.654 13:43:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:54.654 13:43:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:54.654 13:43:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.654 13:43:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.654 13:43:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:54.654 13:43:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:54.654 13:43:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:54.654 13:43:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:54.913 13:43:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:54.913 13:43:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:54.913 13:43:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:54.913 13:43:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:54.913 13:43:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:54.913 13:43:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:54.913 13:43:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:54.913 13:43:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:54.913 13:43:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:54.913 13:43:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:55.172 13:43:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:55.172 13:43:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:55.172 13:43:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:55.172 13:43:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:55.172 13:43:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:55.172 13:43:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:55.172 13:43:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:55.172 13:43:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:55.172 13:43:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:55.172 13:43:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.172 13:43:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:55.432 13:43:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:55.432 13:43:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:55.432 13:43:54 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:04:55.432 13:43:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:55.432 13:43:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:55.432 13:43:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:55.432 13:43:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:55.432 13:43:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:55.432 13:43:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:55.432 13:43:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:55.432 13:43:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:55.432 13:43:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:55.432 13:43:54 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:56.001 13:43:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:56.001 [2024-12-06 13:43:55.373021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:56.259 [2024-12-06 13:43:55.413164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.259 [2024-12-06 13:43:55.413181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.259 [2024-12-06 13:43:55.485482] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:56.259 [2024-12-06 13:43:55.485581] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:56.259 [2024-12-06 13:43:55.485594] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:58.795 spdk_app_start Round 2 00:04:58.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:58.795 13:43:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:58.795 13:43:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:58.795 13:43:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58166 /var/tmp/spdk-nbd.sock 00:04:58.795 13:43:58 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58166 ']' 00:04:58.795 13:43:58 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:58.795 13:43:58 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.795 13:43:58 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
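
After each round's devices are stopped, the helper double-checks the bookkeeping over RPC: nbd_get_disks returns a JSON array that is reduced to a device count with jq and grep -c, expected to be 2 while the exports exist and 0 once they are gone (the '[]' / count=0 exchange in the trace above). A small sketch of that check using the same RPCs seen in the log:

    # Count attached NBD devices via nbd_get_disks; expect 0 after nbd_stop_disk has run.
    sock=/var/tmp/spdk-nbd.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    disks_json=$("$rpc" -s "$sock" nbd_get_disks)
    names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$names" | grep -c /dev/nbd || true)         # grep -c prints 0 but exits non-zero on no matches
    [ "$count" -eq 0 ] || echo "unexpected NBD count: $count"
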
00:04:58.795 13:43:58 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.795 13:43:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:59.054 13:43:58 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.054 13:43:58 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:59.054 13:43:58 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:59.312 Malloc0 00:04:59.312 13:43:58 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:59.572 Malloc1 00:04:59.572 13:43:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:59.572 13:43:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.572 13:43:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:59.572 13:43:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:59.572 13:43:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.572 13:43:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:59.572 13:43:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:59.572 13:43:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.572 13:43:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:59.572 13:43:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:59.572 13:43:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.572 13:43:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:59.572 13:43:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:59.572 13:43:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:59.572 13:43:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.572 13:43:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:59.832 /dev/nbd0 00:04:59.832 13:43:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:59.832 13:43:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:59.832 13:43:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:59.832 13:43:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:59.832 13:43:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:59.832 13:43:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:59.832 13:43:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:59.832 13:43:59 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:59.832 13:43:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:59.832 13:43:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:59.832 13:43:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:59.832 1+0 records in 00:04:59.832 1+0 records out 
00:04:59.832 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191756 s, 21.4 MB/s 00:04:59.832 13:43:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.832 13:43:59 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:59.832 13:43:59 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.832 13:43:59 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:59.832 13:43:59 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:59.832 13:43:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:59.832 13:43:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.832 13:43:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:00.091 /dev/nbd1 00:05:00.091 13:43:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:00.091 13:43:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:00.091 13:43:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:00.091 13:43:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:00.091 13:43:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:00.091 13:43:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:00.091 13:43:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:00.091 13:43:59 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:00.091 13:43:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:00.091 13:43:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:00.091 13:43:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:00.091 1+0 records in 00:05:00.091 1+0 records out 00:05:00.091 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267624 s, 15.3 MB/s 00:05:00.091 13:43:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:00.091 13:43:59 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:00.092 13:43:59 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:00.092 13:43:59 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:00.092 13:43:59 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:00.092 13:43:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:00.092 13:43:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.092 13:43:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:00.092 13:43:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.092 13:43:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:00.661 13:43:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:00.661 { 00:05:00.661 "nbd_device": "/dev/nbd0", 00:05:00.661 "bdev_name": "Malloc0" 00:05:00.661 }, 00:05:00.661 { 00:05:00.661 "nbd_device": "/dev/nbd1", 00:05:00.661 "bdev_name": "Malloc1" 00:05:00.661 } 
00:05:00.661 ]' 00:05:00.661 13:43:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:00.661 13:43:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:00.661 { 00:05:00.661 "nbd_device": "/dev/nbd0", 00:05:00.661 "bdev_name": "Malloc0" 00:05:00.661 }, 00:05:00.661 { 00:05:00.661 "nbd_device": "/dev/nbd1", 00:05:00.661 "bdev_name": "Malloc1" 00:05:00.661 } 00:05:00.661 ]' 00:05:00.661 13:43:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:00.661 /dev/nbd1' 00:05:00.661 13:43:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:00.661 /dev/nbd1' 00:05:00.661 13:43:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:00.661 13:43:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:00.661 13:43:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:00.661 13:43:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:00.661 13:43:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:00.661 13:43:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:00.661 13:43:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.661 13:43:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:00.661 13:43:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:00.661 13:43:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:00.661 13:43:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:00.661 13:43:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:00.661 256+0 records in 00:05:00.661 256+0 records out 00:05:00.661 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010616 s, 98.8 MB/s 00:05:00.661 13:43:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:00.661 13:43:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:00.661 256+0 records in 00:05:00.661 256+0 records out 00:05:00.661 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0218906 s, 47.9 MB/s 00:05:00.661 13:43:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:00.661 13:43:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:00.661 256+0 records in 00:05:00.661 256+0 records out 00:05:00.661 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253702 s, 41.3 MB/s 00:05:00.661 13:43:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:00.661 13:43:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.661 13:43:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:00.661 13:43:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:00.661 13:43:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:00.661 13:43:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:00.661 13:43:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:00.661 13:43:59 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:00.661 13:43:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:00.661 13:43:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:00.661 13:43:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:00.661 13:43:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:00.661 13:43:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:00.661 13:43:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.661 13:43:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.661 13:43:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:00.662 13:43:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:00.662 13:43:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:00.662 13:43:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:00.921 13:44:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:00.921 13:44:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:00.921 13:44:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:00.921 13:44:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:00.921 13:44:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:00.921 13:44:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:00.921 13:44:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:00.921 13:44:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:00.921 13:44:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:00.921 13:44:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:01.181 13:44:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:01.181 13:44:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:01.181 13:44:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:01.181 13:44:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.181 13:44:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:01.181 13:44:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:01.181 13:44:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:01.181 13:44:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:01.181 13:44:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:01.181 13:44:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.181 13:44:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:01.440 13:44:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:01.440 13:44:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:01.440 13:44:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:05:01.699 13:44:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:01.699 13:44:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:01.699 13:44:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:01.699 13:44:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:01.699 13:44:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:01.699 13:44:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:01.699 13:44:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:01.699 13:44:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:01.699 13:44:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:01.699 13:44:00 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:01.958 13:44:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:02.217 [2024-12-06 13:44:01.422606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:02.217 [2024-12-06 13:44:01.460528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.217 [2024-12-06 13:44:01.460544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.217 [2024-12-06 13:44:01.531379] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:02.217 [2024-12-06 13:44:01.531483] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:02.217 [2024-12-06 13:44:01.531496] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:05.530 13:44:04 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58166 /var/tmp/spdk-nbd.sock 00:05:05.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:05.530 13:44:04 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58166 ']' 00:05:05.530 13:44:04 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:05.530 13:44:04 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.530 13:44:04 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
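The nbd_common.sh trace above boils down to a small write/verify/teardown cycle: fill a temp file with 1 MiB of random data, dd it onto each exported /dev/nbdX, read it back with cmp, then stop every export over the /var/tmp/spdk-nbd.sock RPC socket and poll /proc/partitions until the kernel node goes away. A minimal stand-alone sketch of that cycle follows; it is not the real nbd_common.sh, and the device list and script paths are illustrative.

    #!/usr/bin/env bash
    # Sketch of the nbd write/verify/teardown flow traced above (not the real nbd_common.sh).
    set -euo pipefail

    rpc_sock=/var/tmp/spdk-nbd.sock        # RPC socket of the nbd test app
    rpc_py=./scripts/rpc.py                # assumes an SPDK checkout; adjust as needed
    nbd_list=(/dev/nbd0 /dev/nbd1)
    tmp_file=$(mktemp /tmp/nbdrandtest.XXXXXX)

    # 1 MiB of random data, copied onto every exported nbd device.
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # Read the devices back and compare byte-for-byte against the source file.
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"

    # Stop each export and wait for the nbd node to disappear from /proc/partitions.
    for dev in "${nbd_list[@]}"; do
        "$rpc_py" -s "$rpc_sock" nbd_stop_disk "$dev"
        name=$(basename "$dev")
        for i in $(seq 1 20); do
            grep -q -w "$name" /proc/partitions || break
            sleep 0.1
        done
    done

    # Finally, nbd_get_disks should report no remaining exports.
    count=$("$rpc_py" -s "$rpc_sock" nbd_get_disks | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]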
00:05:05.530 13:44:04 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.530 13:44:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:05.530 13:44:04 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.530 13:44:04 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:05.531 13:44:04 event.app_repeat -- event/event.sh@39 -- # killprocess 58166 00:05:05.531 13:44:04 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58166 ']' 00:05:05.531 13:44:04 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58166 00:05:05.531 13:44:04 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:05.531 13:44:04 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.531 13:44:04 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58166 00:05:05.531 killing process with pid 58166 00:05:05.531 13:44:04 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:05.531 13:44:04 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:05.531 13:44:04 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58166' 00:05:05.531 13:44:04 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58166 00:05:05.531 13:44:04 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58166 00:05:05.531 spdk_app_start is called in Round 0. 00:05:05.531 Shutdown signal received, stop current app iteration 00:05:05.531 Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 reinitialization... 00:05:05.531 spdk_app_start is called in Round 1. 00:05:05.531 Shutdown signal received, stop current app iteration 00:05:05.531 Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 reinitialization... 00:05:05.531 spdk_app_start is called in Round 2. 00:05:05.531 Shutdown signal received, stop current app iteration 00:05:05.531 Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 reinitialization... 00:05:05.531 spdk_app_start is called in Round 3. 00:05:05.531 Shutdown signal received, stop current app iteration 00:05:05.531 ************************************ 00:05:05.531 END TEST app_repeat 00:05:05.531 ************************************ 00:05:05.531 13:44:04 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:05.531 13:44:04 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:05.531 00:05:05.531 real 0m18.394s 00:05:05.531 user 0m41.590s 00:05:05.531 sys 0m2.699s 00:05:05.531 13:44:04 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.531 13:44:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:05.531 13:44:04 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:05.531 13:44:04 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:05.531 13:44:04 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.531 13:44:04 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.531 13:44:04 event -- common/autotest_common.sh@10 -- # set +x 00:05:05.531 ************************************ 00:05:05.531 START TEST cpu_locks 00:05:05.531 ************************************ 00:05:05.531 13:44:04 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:05.531 * Looking for test storage... 
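killprocess and waitforlisten in the trace above are helpers from test/common/autotest_common.sh; the shutdown side checks the pid is still alive with kill -0, refuses to touch a process whose comm is sudo, then kills it and reaps it with wait. A simplified sketch of that pattern (not the SPDK helper itself):

    # Simplified sketch of the killprocess pattern seen in the trace.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0       # already gone, nothing to do
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1   # never kill a sudo wrapper by mistake
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true              # reap it if it is a child of this shell
    }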
00:05:05.531 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:05.531 13:44:04 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:05.531 13:44:04 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:05.531 13:44:04 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:05.789 13:44:04 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:05.789 13:44:04 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.789 13:44:04 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.789 13:44:04 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.789 13:44:04 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.789 13:44:04 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.789 13:44:04 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.789 13:44:04 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.789 13:44:04 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.789 13:44:04 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.789 13:44:04 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.789 13:44:04 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.789 13:44:04 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:05.789 13:44:04 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:05.789 13:44:04 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.789 13:44:04 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:05.789 13:44:04 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:05.789 13:44:04 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:05.789 13:44:04 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.789 13:44:04 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:05.789 13:44:04 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.789 13:44:04 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:05.789 13:44:04 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:05.789 13:44:04 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.789 13:44:04 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:05.789 13:44:04 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.789 13:44:04 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.789 13:44:04 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.789 13:44:04 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:05.789 13:44:04 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.789 13:44:04 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:05.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.789 --rc genhtml_branch_coverage=1 00:05:05.789 --rc genhtml_function_coverage=1 00:05:05.789 --rc genhtml_legend=1 00:05:05.789 --rc geninfo_all_blocks=1 00:05:05.789 --rc geninfo_unexecuted_blocks=1 00:05:05.789 00:05:05.789 ' 00:05:05.789 13:44:04 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:05.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.789 --rc genhtml_branch_coverage=1 00:05:05.789 --rc genhtml_function_coverage=1 
00:05:05.789 --rc genhtml_legend=1 00:05:05.789 --rc geninfo_all_blocks=1 00:05:05.789 --rc geninfo_unexecuted_blocks=1 00:05:05.789 00:05:05.789 ' 00:05:05.789 13:44:04 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:05.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.789 --rc genhtml_branch_coverage=1 00:05:05.789 --rc genhtml_function_coverage=1 00:05:05.789 --rc genhtml_legend=1 00:05:05.789 --rc geninfo_all_blocks=1 00:05:05.789 --rc geninfo_unexecuted_blocks=1 00:05:05.789 00:05:05.789 ' 00:05:05.789 13:44:04 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:05.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.789 --rc genhtml_branch_coverage=1 00:05:05.789 --rc genhtml_function_coverage=1 00:05:05.789 --rc genhtml_legend=1 00:05:05.789 --rc geninfo_all_blocks=1 00:05:05.789 --rc geninfo_unexecuted_blocks=1 00:05:05.789 00:05:05.789 ' 00:05:05.789 13:44:04 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:05.789 13:44:04 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:05.789 13:44:04 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:05.789 13:44:04 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:05.789 13:44:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.789 13:44:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.789 13:44:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:05.789 ************************************ 00:05:05.789 START TEST default_locks 00:05:05.789 ************************************ 00:05:05.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.789 13:44:04 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:05.789 13:44:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58599 00:05:05.789 13:44:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58599 00:05:05.789 13:44:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:05.789 13:44:04 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58599 ']' 00:05:05.789 13:44:04 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.789 13:44:04 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.789 13:44:04 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.789 13:44:04 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.789 13:44:04 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:05.789 [2024-12-06 13:44:05.036977] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
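The lcov probe at the top of cpu_locks.sh goes through the lt/cmp_versions helpers from scripts/common.sh: split both version strings on '.', '-' and ':' and compare them field by field. A rough stand-alone equivalent of that comparison (illustrative only, without the ge/gt/le variants or the digit validation of the real helper):

    # Rough equivalent of the cmp_versions '<' check traced above.
    version_lt() {
        local IFS='.-:'
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local n=${#ver1[@]}
        (( ${#ver2[@]} > n )) && n=${#ver2[@]}
        for (( v = 0; v < n; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1    # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov is older than 2.x"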
00:05:05.789 [2024-12-06 13:44:05.037209] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58599 ] 00:05:05.789 [2024-12-06 13:44:05.176627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.047 [2024-12-06 13:44:05.228579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.047 [2024-12-06 13:44:05.317722] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:06.305 13:44:05 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.305 13:44:05 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:06.305 13:44:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58599 00:05:06.305 13:44:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58599 00:05:06.305 13:44:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:06.563 13:44:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58599 00:05:06.563 13:44:05 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58599 ']' 00:05:06.563 13:44:05 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58599 00:05:06.563 13:44:05 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:06.563 13:44:05 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:06.563 13:44:05 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58599 00:05:06.563 killing process with pid 58599 00:05:06.563 13:44:05 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:06.563 13:44:05 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:06.563 13:44:05 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58599' 00:05:06.563 13:44:05 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58599 00:05:06.563 13:44:05 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58599 00:05:07.130 13:44:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58599 00:05:07.130 13:44:06 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:07.130 13:44:06 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58599 00:05:07.130 13:44:06 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:07.130 13:44:06 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:07.130 13:44:06 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:07.130 13:44:06 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:07.130 13:44:06 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58599 00:05:07.130 13:44:06 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58599 ']' 00:05:07.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
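locks_exist, which the default_locks test just used against pid 58599, is conceptually tiny: a target started with -m 0x1 must hold a lock whose path contains spdk_cpu_lock (the files live under /var/tmp/spdk_cpu_lock_NNN), and lslocks for that pid is enough to prove it. A hedged sketch, not the cpu_locks.sh source:

    # Sketch of the locks_exist assertion from cpu_locks.sh.
    locks_exist() {
        local pid=$1
        # lslocks lists every lock held by the pid; the per-core files show up
        # as /var/tmp/spdk_cpu_lock_000, _001, ... depending on the core mask.
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    # Usage against an already-running target:
    #   locks_exist "$spdk_tgt_pid" && echo "core lock is held"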
00:05:07.130 13:44:06 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.130 13:44:06 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.130 13:44:06 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.130 13:44:06 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.130 ERROR: process (pid: 58599) is no longer running 00:05:07.130 13:44:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:07.130 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58599) - No such process 00:05:07.130 13:44:06 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.130 13:44:06 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:07.130 13:44:06 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:07.130 13:44:06 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:07.130 13:44:06 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:07.130 13:44:06 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:07.130 13:44:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:07.130 13:44:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:07.130 13:44:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:07.130 13:44:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:07.130 00:05:07.130 real 0m1.364s 00:05:07.130 user 0m1.243s 00:05:07.130 sys 0m0.498s 00:05:07.130 ************************************ 00:05:07.130 END TEST default_locks 00:05:07.130 ************************************ 00:05:07.130 13:44:06 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.130 13:44:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:07.130 13:44:06 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:07.130 13:44:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.130 13:44:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.130 13:44:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:07.130 ************************************ 00:05:07.130 START TEST default_locks_via_rpc 00:05:07.130 ************************************ 00:05:07.130 13:44:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:07.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
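The NOT helper that just ran around waitforlisten is the standard expected-failure wrapper from autotest_common.sh: execute the command, and succeed only if it failed. A condensed sketch of that pattern is below; the real helper additionally distinguishes signal exits (status above 128) and can match error text, which this version leaves out.

    # Condensed sketch of the NOT expected-failure wrapper.
    NOT() {
        local es=0
        "$@" || es=$?
        # The wrapped command succeeding means the negative test failed.
        (( es != 0 ))
    }

    # Example: once the target has been killed, waiting on its RPC socket must fail.
    #   NOT waitforlisten "$spdk_tgt_pid" /var/tmp/spdk.sock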
00:05:07.130 13:44:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58638 00:05:07.130 13:44:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58638 00:05:07.130 13:44:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:07.130 13:44:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58638 ']' 00:05:07.130 13:44:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.130 13:44:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.130 13:44:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.130 13:44:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.130 13:44:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.130 [2024-12-06 13:44:06.471192] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:05:07.130 [2024-12-06 13:44:06.471285] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58638 ] 00:05:07.388 [2024-12-06 13:44:06.610836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.388 [2024-12-06 13:44:06.658520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.388 [2024-12-06 13:44:06.747311] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:08.323 13:44:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.323 13:44:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:08.323 13:44:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:08.323 13:44:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.323 13:44:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.323 13:44:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.323 13:44:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:08.323 13:44:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:08.323 13:44:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:08.323 13:44:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:08.323 13:44:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:08.323 13:44:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.323 13:44:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.323 13:44:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.323 13:44:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # 
locks_exist 58638 00:05:08.323 13:44:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58638 00:05:08.323 13:44:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:08.582 13:44:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58638 00:05:08.582 13:44:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58638 ']' 00:05:08.582 13:44:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58638 00:05:08.582 13:44:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:08.582 13:44:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:08.582 13:44:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58638 00:05:08.582 killing process with pid 58638 00:05:08.582 13:44:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:08.582 13:44:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:08.582 13:44:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58638' 00:05:08.582 13:44:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58638 00:05:08.582 13:44:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58638 00:05:09.150 ************************************ 00:05:09.150 END TEST default_locks_via_rpc 00:05:09.150 ************************************ 00:05:09.150 00:05:09.150 real 0m1.979s 00:05:09.150 user 0m2.053s 00:05:09.150 sys 0m0.617s 00:05:09.150 13:44:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.150 13:44:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.150 13:44:08 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:09.150 13:44:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.150 13:44:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.150 13:44:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.150 ************************************ 00:05:09.150 START TEST non_locking_app_on_locked_coremask 00:05:09.150 ************************************ 00:05:09.150 13:44:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:09.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
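default_locks_via_rpc exercises the same lock lifecycle over JSON-RPC: release the per-core lock files with framework_disable_cpumask_locks, confirm nothing is left under /var/tmp/spdk_cpu_lock_*, then take the locks back with framework_enable_cpumask_locks before the lslocks check. A hedged outline driving scripts/rpc.py directly (socket path and $spdk_tgt_pid are placeholders for an already-running target):

    # Outline of the default_locks_via_rpc flow using the RPCs seen in the trace.
    rpc_py="./scripts/rpc.py -s /var/tmp/spdk.sock"
    shopt -s nullglob                                  # empty glob -> empty array below

    $rpc_py framework_disable_cpumask_locks            # release the per-core lock files
    lock_files=(/var/tmp/spdk_cpu_lock_*)
    (( ${#lock_files[@]} == 0 )) || echo "locks still present: ${lock_files[*]}"

    $rpc_py framework_enable_cpumask_locks             # take the locks again
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock && echo "core lock re-acquired"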
00:05:09.150 13:44:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58689 00:05:09.150 13:44:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58689 /var/tmp/spdk.sock 00:05:09.150 13:44:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58689 ']' 00:05:09.150 13:44:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:09.150 13:44:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.150 13:44:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.150 13:44:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.150 13:44:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.150 13:44:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:09.150 [2024-12-06 13:44:08.504044] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:05:09.150 [2024-12-06 13:44:08.504183] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58689 ] 00:05:09.409 [2024-12-06 13:44:08.642157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.409 [2024-12-06 13:44:08.687534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.409 [2024-12-06 13:44:08.774198] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:10.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:10.346 13:44:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.346 13:44:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:10.346 13:44:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58705 00:05:10.346 13:44:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:10.346 13:44:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58705 /var/tmp/spdk2.sock 00:05:10.346 13:44:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58705 ']' 00:05:10.346 13:44:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:10.346 13:44:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.346 13:44:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
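non_locking_app_on_locked_coremask, starting above, shows the coexistence case: the first spdk_tgt claims core 0 as usual, and the second runs on the same mask but with --disable-cpumask-locks and its own RPC socket (-r /var/tmp/spdk2.sock), so it never competes for the lock file. A minimal sketch of that setup (binary path as in the trace, pid handling simplified):

    # Sketch: two spdk_tgt instances sharing core 0, the second skipping core locks.
    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$bin" -m 0x1 -r /var/tmp/spdk.sock &                            # claims /var/tmp/spdk_cpu_lock_000
    pid1=$!
    "$bin" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # same core, no lock taken
    pid2=$!

    # In the test, waitforlisten blocks on each RPC socket before the assertions run:
    #   waitforlisten "$pid1" /var/tmp/spdk.sock
    #   waitforlisten "$pid2" /var/tmp/spdk2.sock
    kill "$pid1" "$pid2"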
00:05:10.346 13:44:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.346 13:44:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.346 [2024-12-06 13:44:09.504343] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:05:10.346 [2024-12-06 13:44:09.504663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58705 ] 00:05:10.346 [2024-12-06 13:44:09.657291] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:10.346 [2024-12-06 13:44:09.657337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.605 [2024-12-06 13:44:09.757912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.605 [2024-12-06 13:44:09.927788] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:11.181 13:44:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.181 13:44:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:11.181 13:44:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58689 00:05:11.181 13:44:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58689 00:05:11.181 13:44:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:12.157 13:44:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58689 00:05:12.157 13:44:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58689 ']' 00:05:12.157 13:44:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58689 00:05:12.157 13:44:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:12.157 13:44:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:12.157 13:44:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58689 00:05:12.157 13:44:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:12.157 13:44:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:12.157 killing process with pid 58689 00:05:12.157 13:44:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58689' 00:05:12.157 13:44:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58689 00:05:12.157 13:44:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58689 00:05:13.097 13:44:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58705 00:05:13.097 13:44:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58705 ']' 00:05:13.097 13:44:12 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@958 -- # kill -0 58705 00:05:13.097 13:44:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:13.097 13:44:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:13.097 13:44:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58705 00:05:13.097 killing process with pid 58705 00:05:13.097 13:44:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:13.098 13:44:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:13.098 13:44:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58705' 00:05:13.098 13:44:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58705 00:05:13.098 13:44:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58705 00:05:13.358 ************************************ 00:05:13.358 END TEST non_locking_app_on_locked_coremask 00:05:13.358 ************************************ 00:05:13.358 00:05:13.358 real 0m4.286s 00:05:13.358 user 0m4.594s 00:05:13.358 sys 0m1.217s 00:05:13.358 13:44:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.358 13:44:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:13.617 13:44:12 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:13.617 13:44:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.617 13:44:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.617 13:44:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.617 ************************************ 00:05:13.617 START TEST locking_app_on_unlocked_coremask 00:05:13.617 ************************************ 00:05:13.618 13:44:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:13.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.618 13:44:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58778 00:05:13.618 13:44:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58778 /var/tmp/spdk.sock 00:05:13.618 13:44:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:13.618 13:44:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58778 ']' 00:05:13.618 13:44:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.618 13:44:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.618 13:44:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:13.618 13:44:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.618 13:44:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:13.618 [2024-12-06 13:44:12.847138] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:05:13.618 [2024-12-06 13:44:12.847428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58778 ] 00:05:13.618 [2024-12-06 13:44:12.992377] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:13.618 [2024-12-06 13:44:12.992613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.878 [2024-12-06 13:44:13.042814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.878 [2024-12-06 13:44:13.130355] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:14.447 13:44:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.447 13:44:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:14.447 13:44:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:14.447 13:44:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58794 00:05:14.447 13:44:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58794 /var/tmp/spdk2.sock 00:05:14.447 13:44:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58794 ']' 00:05:14.447 13:44:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:14.447 13:44:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.447 13:44:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:14.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:14.447 13:44:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.447 13:44:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.707 [2024-12-06 13:44:13.879156] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:05:14.707 [2024-12-06 13:44:13.879386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58794 ] 00:05:14.707 [2024-12-06 13:44:14.027562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.967 [2024-12-06 13:44:14.136539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.967 [2024-12-06 13:44:14.310444] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:15.537 13:44:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.537 13:44:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:15.537 13:44:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58794 00:05:15.537 13:44:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58794 00:05:15.537 13:44:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:16.474 13:44:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58778 00:05:16.474 13:44:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58778 ']' 00:05:16.474 13:44:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58778 00:05:16.474 13:44:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:16.474 13:44:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:16.474 13:44:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58778 00:05:16.474 killing process with pid 58778 00:05:16.474 13:44:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:16.474 13:44:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:16.474 13:44:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58778' 00:05:16.474 13:44:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58778 00:05:16.474 13:44:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58778 00:05:17.413 13:44:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58794 00:05:17.413 13:44:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58794 ']' 00:05:17.413 13:44:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58794 00:05:17.413 13:44:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:17.413 13:44:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.413 13:44:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58794 00:05:17.413 killing process with pid 58794 00:05:17.413 13:44:16 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.413 13:44:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.413 13:44:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58794' 00:05:17.413 13:44:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58794 00:05:17.413 13:44:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58794 00:05:17.983 00:05:17.983 real 0m4.340s 00:05:17.983 user 0m4.650s 00:05:17.983 sys 0m1.242s 00:05:17.983 13:44:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.983 ************************************ 00:05:17.983 END TEST locking_app_on_unlocked_coremask 00:05:17.983 ************************************ 00:05:17.983 13:44:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:17.983 13:44:17 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:17.983 13:44:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.983 13:44:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.983 13:44:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:17.983 ************************************ 00:05:17.983 START TEST locking_app_on_locked_coremask 00:05:17.983 ************************************ 00:05:17.983 13:44:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:17.983 13:44:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58866 00:05:17.983 13:44:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58866 /var/tmp/spdk.sock 00:05:17.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.983 13:44:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58866 ']' 00:05:17.983 13:44:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:17.983 13:44:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.983 13:44:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.983 13:44:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.983 13:44:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.984 13:44:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:17.984 [2024-12-06 13:44:17.242486] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:05:17.984 [2024-12-06 13:44:17.242609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58866 ] 00:05:18.243 [2024-12-06 13:44:17.389238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.243 [2024-12-06 13:44:17.434737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.244 [2024-12-06 13:44:17.521044] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:18.503 13:44:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.503 13:44:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:18.503 13:44:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58869 00:05:18.503 13:44:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:18.503 13:44:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58869 /var/tmp/spdk2.sock 00:05:18.503 13:44:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:18.503 13:44:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58869 /var/tmp/spdk2.sock 00:05:18.503 13:44:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:18.503 13:44:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:18.503 13:44:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:18.503 13:44:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:18.503 13:44:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58869 /var/tmp/spdk2.sock 00:05:18.503 13:44:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58869 ']' 00:05:18.503 13:44:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:18.503 13:44:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.503 13:44:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:18.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:18.503 13:44:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.503 13:44:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.503 [2024-12-06 13:44:17.828947] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
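locking_app_on_locked_coremask is the negative counterpart: the second target keeps cpumask locks enabled on the same -m 0x1 mask, so its startup is expected to die with the "Cannot create lock on core 0, probably process ... has claimed it" error that follows in the trace, and NOT waitforlisten asserts exactly that. A minimal reproduction of the expectation (NOT and waitforlisten being the autotest_common.sh helpers sketched earlier):

    # Sketch: a second target on an already-claimed core must fail to start.
    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$bin" -m 0x1 -r /var/tmp/spdk.sock &      # first instance claims core 0
    pid1=$!
    "$bin" -m 0x1 -r /var/tmp/spdk2.sock &     # same mask, locks still enabled
    pid2=$!

    # The second instance should abort with "Unable to acquire lock on assigned core mask",
    # so waiting for its RPC socket has to fail:
    #   NOT waitforlisten "$pid2" /var/tmp/spdk2.sock
    kill "$pid1"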
00:05:18.503 [2024-12-06 13:44:17.829234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58869 ] 00:05:18.762 [2024-12-06 13:44:17.981456] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58866 has claimed it. 00:05:18.762 [2024-12-06 13:44:17.981516] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:19.330 ERROR: process (pid: 58869) is no longer running 00:05:19.330 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58869) - No such process 00:05:19.330 13:44:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.330 13:44:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:19.330 13:44:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:19.330 13:44:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:19.330 13:44:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:19.330 13:44:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:19.330 13:44:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58866 00:05:19.330 13:44:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58866 00:05:19.330 13:44:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:19.589 13:44:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58866 00:05:19.590 13:44:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58866 ']' 00:05:19.590 13:44:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58866 00:05:19.590 13:44:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:19.590 13:44:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:19.590 13:44:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58866 00:05:19.848 killing process with pid 58866 00:05:19.848 13:44:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:19.848 13:44:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:19.848 13:44:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58866' 00:05:19.848 13:44:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58866 00:05:19.848 13:44:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58866 00:05:20.107 00:05:20.107 real 0m2.323s 00:05:20.107 user 0m2.513s 00:05:20.107 sys 0m0.681s 00:05:20.107 13:44:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.107 13:44:19 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:05:20.107 ************************************ 00:05:20.107 END TEST locking_app_on_locked_coremask 00:05:20.107 ************************************ 00:05:20.366 13:44:19 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:20.366 13:44:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.366 13:44:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.366 13:44:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:20.366 ************************************ 00:05:20.366 START TEST locking_overlapped_coremask 00:05:20.366 ************************************ 00:05:20.366 13:44:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:20.366 13:44:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58920 00:05:20.366 13:44:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 58920 /var/tmp/spdk.sock 00:05:20.366 13:44:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:20.366 13:44:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58920 ']' 00:05:20.366 13:44:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.366 13:44:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.366 13:44:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.366 13:44:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.366 13:44:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.366 [2024-12-06 13:44:19.604883] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:05:20.366 [2024-12-06 13:44:19.604955] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58920 ] 00:05:20.366 [2024-12-06 13:44:19.741762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:20.626 [2024-12-06 13:44:19.791244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.626 [2024-12-06 13:44:19.791379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:20.626 [2024-12-06 13:44:19.791381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.626 [2024-12-06 13:44:19.880460] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:20.885 13:44:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.885 13:44:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:20.885 13:44:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58932 00:05:20.885 13:44:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:20.885 13:44:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58932 /var/tmp/spdk2.sock 00:05:20.885 13:44:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:20.885 13:44:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58932 /var/tmp/spdk2.sock 00:05:20.885 13:44:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:20.885 13:44:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:20.885 13:44:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:20.885 13:44:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:20.885 13:44:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58932 /var/tmp/spdk2.sock 00:05:20.885 13:44:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58932 ']' 00:05:20.885 13:44:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:20.885 13:44:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.885 13:44:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:20.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:20.885 13:44:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.885 13:44:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.885 [2024-12-06 13:44:20.196279] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:05:20.885 [2024-12-06 13:44:20.196563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58932 ] 00:05:21.144 [2024-12-06 13:44:20.352529] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58920 has claimed it. 00:05:21.144 [2024-12-06 13:44:20.356147] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:21.713 ERROR: process (pid: 58932) is no longer running 00:05:21.713 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58932) - No such process 00:05:21.713 13:44:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.713 13:44:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:21.713 13:44:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:21.713 13:44:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:21.713 13:44:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:21.713 13:44:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:21.713 13:44:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:21.713 13:44:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:21.713 13:44:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:21.713 13:44:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:21.713 13:44:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 58920 00:05:21.713 13:44:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 58920 ']' 00:05:21.713 13:44:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 58920 00:05:21.713 13:44:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:21.713 13:44:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:21.713 13:44:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58920 00:05:21.713 13:44:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:21.713 13:44:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:21.714 13:44:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58920' 00:05:21.714 killing process with pid 58920 00:05:21.714 13:44:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 58920 00:05:21.714 13:44:20 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 58920 00:05:22.282 00:05:22.282 real 0m1.891s 00:05:22.282 user 0m5.164s 00:05:22.282 sys 0m0.467s 00:05:22.282 13:44:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.282 13:44:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:22.282 ************************************ 00:05:22.282 END TEST locking_overlapped_coremask 00:05:22.282 ************************************ 00:05:22.282 13:44:21 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:22.282 13:44:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.282 13:44:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.282 13:44:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.282 ************************************ 00:05:22.282 START TEST locking_overlapped_coremask_via_rpc 00:05:22.282 ************************************ 00:05:22.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.282 13:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:22.282 13:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58976 00:05:22.282 13:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 58976 /var/tmp/spdk.sock 00:05:22.282 13:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58976 ']' 00:05:22.282 13:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:22.282 13:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.282 13:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.282 13:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.282 13:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.282 13:44:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.282 [2024-12-06 13:44:21.562794] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:05:22.282 [2024-12-06 13:44:21.562889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58976 ] 00:05:22.542 [2024-12-06 13:44:21.701436] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:22.542 [2024-12-06 13:44:21.701486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:22.542 [2024-12-06 13:44:21.750556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.542 [2024-12-06 13:44:21.750671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:22.542 [2024-12-06 13:44:21.750679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.542 [2024-12-06 13:44:21.837649] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:23.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:23.111 13:44:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.111 13:44:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:23.111 13:44:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:23.111 13:44:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58994 00:05:23.111 13:44:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 58994 /var/tmp/spdk2.sock 00:05:23.111 13:44:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58994 ']' 00:05:23.111 13:44:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:23.111 13:44:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.111 13:44:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:23.111 13:44:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.111 13:44:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.371 [2024-12-06 13:44:22.543224] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:05:23.371 [2024-12-06 13:44:22.543470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58994 ] 00:05:23.371 [2024-12-06 13:44:22.697742] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:23.371 [2024-12-06 13:44:22.697870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:23.631 [2024-12-06 13:44:22.797983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:23.631 [2024-12-06 13:44:22.799276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:23.631 [2024-12-06 13:44:22.799277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:23.631 [2024-12-06 13:44:22.935532] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:24.200 13:44:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.200 13:44:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:24.200 13:44:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:24.200 13:44:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.200 13:44:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.200 13:44:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.200 13:44:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:24.200 13:44:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:24.200 13:44:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:24.200 13:44:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:24.200 13:44:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:24.200 13:44:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:24.200 13:44:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:24.200 13:44:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:24.200 13:44:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.200 13:44:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.200 [2024-12-06 13:44:23.577244] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58976 has claimed it. 00:05:24.200 request: 00:05:24.200 { 00:05:24.200 "method": "framework_enable_cpumask_locks", 00:05:24.200 "req_id": 1 00:05:24.200 } 00:05:24.200 Got JSON-RPC error response 00:05:24.200 response: 00:05:24.200 { 00:05:24.200 "code": -32603, 00:05:24.200 "message": "Failed to claim CPU core: 2" 00:05:24.200 } 00:05:24.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:24.200 13:44:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:24.200 13:44:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:24.200 13:44:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:24.200 13:44:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:24.200 13:44:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:24.200 13:44:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 58976 /var/tmp/spdk.sock 00:05:24.200 13:44:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58976 ']' 00:05:24.200 13:44:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.200 13:44:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.200 13:44:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.200 13:44:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.200 13:44:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.769 13:44:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.769 13:44:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:24.769 13:44:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 58994 /var/tmp/spdk2.sock 00:05:24.769 13:44:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58994 ']' 00:05:24.769 13:44:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:24.769 13:44:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.769 13:44:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:24.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:24.769 13:44:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.769 13:44:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.769 13:44:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.769 13:44:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:24.769 13:44:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:24.769 13:44:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:24.769 13:44:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:25.029 ************************************ 00:05:25.029 END TEST locking_overlapped_coremask_via_rpc 00:05:25.029 ************************************ 00:05:25.029 13:44:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:25.029 00:05:25.029 real 0m2.678s 00:05:25.029 user 0m1.397s 00:05:25.029 sys 0m0.207s 00:05:25.029 13:44:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.029 13:44:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.029 13:44:24 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:25.029 13:44:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58976 ]] 00:05:25.029 13:44:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58976 00:05:25.029 13:44:24 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58976 ']' 00:05:25.029 13:44:24 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58976 00:05:25.029 13:44:24 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:25.029 13:44:24 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:25.029 13:44:24 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58976 00:05:25.029 killing process with pid 58976 00:05:25.029 13:44:24 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:25.029 13:44:24 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:25.029 13:44:24 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58976' 00:05:25.029 13:44:24 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 58976 00:05:25.029 13:44:24 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 58976 00:05:25.598 13:44:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58994 ]] 00:05:25.598 13:44:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58994 00:05:25.598 13:44:24 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58994 ']' 00:05:25.598 13:44:24 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58994 00:05:25.598 13:44:24 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:25.598 13:44:24 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:25.598 
13:44:24 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58994 00:05:25.598 killing process with pid 58994 00:05:25.598 13:44:24 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:25.598 13:44:24 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:25.598 13:44:24 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58994' 00:05:25.598 13:44:24 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 58994 00:05:25.598 13:44:24 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 58994 00:05:25.857 13:44:25 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:25.857 13:44:25 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:25.857 13:44:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58976 ]] 00:05:25.857 13:44:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58976 00:05:25.857 Process with pid 58976 is not found 00:05:25.857 Process with pid 58994 is not found 00:05:25.858 13:44:25 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58976 ']' 00:05:25.858 13:44:25 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58976 00:05:25.858 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (58976) - No such process 00:05:25.858 13:44:25 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 58976 is not found' 00:05:25.858 13:44:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58994 ]] 00:05:25.858 13:44:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58994 00:05:25.858 13:44:25 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58994 ']' 00:05:25.858 13:44:25 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58994 00:05:25.858 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (58994) - No such process 00:05:25.858 13:44:25 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 58994 is not found' 00:05:25.858 13:44:25 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:25.858 00:05:25.858 real 0m20.398s 00:05:25.858 user 0m35.392s 00:05:25.858 sys 0m5.878s 00:05:25.858 13:44:25 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.858 13:44:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.858 ************************************ 00:05:25.858 END TEST cpu_locks 00:05:25.858 ************************************ 00:05:25.858 00:05:25.858 real 0m47.398s 00:05:25.858 user 1m30.341s 00:05:25.858 sys 0m9.356s 00:05:25.858 13:44:25 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.858 13:44:25 event -- common/autotest_common.sh@10 -- # set +x 00:05:25.858 ************************************ 00:05:25.858 END TEST event 00:05:25.858 ************************************ 00:05:26.116 13:44:25 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:26.116 13:44:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.116 13:44:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.116 13:44:25 -- common/autotest_common.sh@10 -- # set +x 00:05:26.116 ************************************ 00:05:26.116 START TEST thread 00:05:26.116 ************************************ 00:05:26.116 13:44:25 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:26.116 * Looking for test storage... 
00:05:26.116 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:26.116 13:44:25 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:26.116 13:44:25 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:05:26.116 13:44:25 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:26.116 13:44:25 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:26.116 13:44:25 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.116 13:44:25 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.116 13:44:25 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.116 13:44:25 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.116 13:44:25 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.116 13:44:25 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.116 13:44:25 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.116 13:44:25 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.116 13:44:25 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.116 13:44:25 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.116 13:44:25 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.116 13:44:25 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:26.116 13:44:25 thread -- scripts/common.sh@345 -- # : 1 00:05:26.116 13:44:25 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.116 13:44:25 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:26.116 13:44:25 thread -- scripts/common.sh@365 -- # decimal 1 00:05:26.116 13:44:25 thread -- scripts/common.sh@353 -- # local d=1 00:05:26.116 13:44:25 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.116 13:44:25 thread -- scripts/common.sh@355 -- # echo 1 00:05:26.116 13:44:25 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.116 13:44:25 thread -- scripts/common.sh@366 -- # decimal 2 00:05:26.116 13:44:25 thread -- scripts/common.sh@353 -- # local d=2 00:05:26.116 13:44:25 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.116 13:44:25 thread -- scripts/common.sh@355 -- # echo 2 00:05:26.116 13:44:25 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.116 13:44:25 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.116 13:44:25 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.116 13:44:25 thread -- scripts/common.sh@368 -- # return 0 00:05:26.116 13:44:25 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.116 13:44:25 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:26.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.116 --rc genhtml_branch_coverage=1 00:05:26.116 --rc genhtml_function_coverage=1 00:05:26.116 --rc genhtml_legend=1 00:05:26.116 --rc geninfo_all_blocks=1 00:05:26.116 --rc geninfo_unexecuted_blocks=1 00:05:26.116 00:05:26.116 ' 00:05:26.116 13:44:25 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:26.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.116 --rc genhtml_branch_coverage=1 00:05:26.116 --rc genhtml_function_coverage=1 00:05:26.116 --rc genhtml_legend=1 00:05:26.116 --rc geninfo_all_blocks=1 00:05:26.116 --rc geninfo_unexecuted_blocks=1 00:05:26.116 00:05:26.116 ' 00:05:26.116 13:44:25 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:26.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:26.116 --rc genhtml_branch_coverage=1 00:05:26.116 --rc genhtml_function_coverage=1 00:05:26.116 --rc genhtml_legend=1 00:05:26.116 --rc geninfo_all_blocks=1 00:05:26.116 --rc geninfo_unexecuted_blocks=1 00:05:26.116 00:05:26.116 ' 00:05:26.116 13:44:25 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:26.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.116 --rc genhtml_branch_coverage=1 00:05:26.116 --rc genhtml_function_coverage=1 00:05:26.116 --rc genhtml_legend=1 00:05:26.116 --rc geninfo_all_blocks=1 00:05:26.116 --rc geninfo_unexecuted_blocks=1 00:05:26.116 00:05:26.116 ' 00:05:26.116 13:44:25 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:26.116 13:44:25 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:26.116 13:44:25 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.116 13:44:25 thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.116 ************************************ 00:05:26.116 START TEST thread_poller_perf 00:05:26.116 ************************************ 00:05:26.116 13:44:25 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:26.116 [2024-12-06 13:44:25.477246] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:05:26.116 [2024-12-06 13:44:25.477507] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59130 ] 00:05:26.374 [2024-12-06 13:44:25.617980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.374 [2024-12-06 13:44:25.664403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.374 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:27.308 [2024-12-06T13:44:26.712Z] ====================================== 00:05:27.308 [2024-12-06T13:44:26.712Z] busy:2206875662 (cyc) 00:05:27.308 [2024-12-06T13:44:26.712Z] total_run_count: 406000 00:05:27.308 [2024-12-06T13:44:26.712Z] tsc_hz: 2200000000 (cyc) 00:05:27.308 [2024-12-06T13:44:26.712Z] ====================================== 00:05:27.308 [2024-12-06T13:44:26.712Z] poller_cost: 5435 (cyc), 2470 (nsec) 00:05:27.566 00:05:27.566 ************************************ 00:05:27.566 END TEST thread_poller_perf 00:05:27.566 ************************************ 00:05:27.566 real 0m1.256s 00:05:27.566 user 0m1.100s 00:05:27.566 sys 0m0.050s 00:05:27.566 13:44:26 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.566 13:44:26 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:27.566 13:44:26 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:27.566 13:44:26 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:27.566 13:44:26 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.566 13:44:26 thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.566 ************************************ 00:05:27.566 START TEST thread_poller_perf 00:05:27.566 ************************************ 00:05:27.566 13:44:26 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:27.566 [2024-12-06 13:44:26.785757] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:05:27.566 [2024-12-06 13:44:26.785863] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59160 ] 00:05:27.566 [2024-12-06 13:44:26.928183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.824 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:27.824 [2024-12-06 13:44:26.971100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.758 [2024-12-06T13:44:28.162Z] ====================================== 00:05:28.758 [2024-12-06T13:44:28.162Z] busy:2202137874 (cyc) 00:05:28.758 [2024-12-06T13:44:28.162Z] total_run_count: 4863000 00:05:28.758 [2024-12-06T13:44:28.162Z] tsc_hz: 2200000000 (cyc) 00:05:28.758 [2024-12-06T13:44:28.162Z] ====================================== 00:05:28.758 [2024-12-06T13:44:28.162Z] poller_cost: 452 (cyc), 205 (nsec) 00:05:28.758 00:05:28.758 real 0m1.256s 00:05:28.758 user 0m1.106s 00:05:28.758 sys 0m0.045s 00:05:28.758 ************************************ 00:05:28.758 END TEST thread_poller_perf 00:05:28.758 ************************************ 00:05:28.758 13:44:28 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.758 13:44:28 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:28.758 13:44:28 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:28.758 00:05:28.758 real 0m2.780s 00:05:28.758 user 0m2.331s 00:05:28.758 sys 0m0.234s 00:05:28.758 13:44:28 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.758 13:44:28 thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.758 ************************************ 00:05:28.758 END TEST thread 00:05:28.758 ************************************ 00:05:28.758 13:44:28 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:28.758 13:44:28 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:28.758 13:44:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.758 13:44:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.758 13:44:28 -- common/autotest_common.sh@10 -- # set +x 00:05:28.758 ************************************ 00:05:28.758 START TEST app_cmdline 00:05:28.758 ************************************ 00:05:28.758 13:44:28 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:29.017 * Looking for test storage... 
00:05:29.017 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:29.017 13:44:28 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:29.017 13:44:28 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:05:29.017 13:44:28 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:29.017 13:44:28 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:29.017 13:44:28 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.017 13:44:28 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.017 13:44:28 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.017 13:44:28 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.017 13:44:28 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.017 13:44:28 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.017 13:44:28 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.017 13:44:28 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.017 13:44:28 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.017 13:44:28 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.017 13:44:28 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.017 13:44:28 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:29.017 13:44:28 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:29.017 13:44:28 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.017 13:44:28 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:29.017 13:44:28 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:29.017 13:44:28 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:29.017 13:44:28 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.017 13:44:28 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:29.017 13:44:28 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.017 13:44:28 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:29.017 13:44:28 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:29.017 13:44:28 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.017 13:44:28 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:29.017 13:44:28 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.017 13:44:28 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.017 13:44:28 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.017 13:44:28 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:29.017 13:44:28 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.017 13:44:28 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:29.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.017 --rc genhtml_branch_coverage=1 00:05:29.017 --rc genhtml_function_coverage=1 00:05:29.017 --rc genhtml_legend=1 00:05:29.017 --rc geninfo_all_blocks=1 00:05:29.017 --rc geninfo_unexecuted_blocks=1 00:05:29.017 00:05:29.017 ' 00:05:29.017 13:44:28 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:29.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.017 --rc genhtml_branch_coverage=1 00:05:29.017 --rc genhtml_function_coverage=1 00:05:29.017 --rc genhtml_legend=1 00:05:29.017 --rc geninfo_all_blocks=1 00:05:29.017 --rc geninfo_unexecuted_blocks=1 00:05:29.017 
00:05:29.017 ' 00:05:29.017 13:44:28 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:29.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.017 --rc genhtml_branch_coverage=1 00:05:29.017 --rc genhtml_function_coverage=1 00:05:29.017 --rc genhtml_legend=1 00:05:29.017 --rc geninfo_all_blocks=1 00:05:29.017 --rc geninfo_unexecuted_blocks=1 00:05:29.017 00:05:29.017 ' 00:05:29.017 13:44:28 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:29.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.017 --rc genhtml_branch_coverage=1 00:05:29.017 --rc genhtml_function_coverage=1 00:05:29.017 --rc genhtml_legend=1 00:05:29.017 --rc geninfo_all_blocks=1 00:05:29.017 --rc geninfo_unexecuted_blocks=1 00:05:29.017 00:05:29.017 ' 00:05:29.017 13:44:28 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:29.017 13:44:28 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59243 00:05:29.017 13:44:28 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59243 00:05:29.017 13:44:28 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59243 ']' 00:05:29.017 13:44:28 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.017 13:44:28 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:29.017 13:44:28 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.017 13:44:28 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.017 13:44:28 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.017 13:44:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:29.017 [2024-12-06 13:44:28.347232] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:05:29.017 [2024-12-06 13:44:28.347337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59243 ] 00:05:29.276 [2024-12-06 13:44:28.490714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.276 [2024-12-06 13:44:28.544834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.276 [2024-12-06 13:44:28.632641] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:30.211 13:44:29 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.211 13:44:29 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:30.211 13:44:29 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:30.211 { 00:05:30.211 "version": "SPDK v25.01-pre git sha1 37ef4f42e", 00:05:30.211 "fields": { 00:05:30.211 "major": 25, 00:05:30.211 "minor": 1, 00:05:30.211 "patch": 0, 00:05:30.211 "suffix": "-pre", 00:05:30.211 "commit": "37ef4f42e" 00:05:30.211 } 00:05:30.211 } 00:05:30.211 13:44:29 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:30.211 13:44:29 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:30.211 13:44:29 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:30.211 13:44:29 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:30.211 13:44:29 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:30.211 13:44:29 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.211 13:44:29 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:30.211 13:44:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:30.211 13:44:29 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:30.211 13:44:29 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.211 13:44:29 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:30.211 13:44:29 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:30.211 13:44:29 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:30.211 13:44:29 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:30.211 13:44:29 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:30.211 13:44:29 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:30.211 13:44:29 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:30.211 13:44:29 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:30.211 13:44:29 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:30.211 13:44:29 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:30.211 13:44:29 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:30.211 13:44:29 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:30.211 13:44:29 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:30.211 13:44:29 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:30.469 request: 00:05:30.469 { 00:05:30.469 "method": "env_dpdk_get_mem_stats", 00:05:30.469 "req_id": 1 00:05:30.469 } 00:05:30.469 Got JSON-RPC error response 00:05:30.469 response: 00:05:30.469 { 00:05:30.469 "code": -32601, 00:05:30.469 "message": "Method not found" 00:05:30.469 } 00:05:30.469 13:44:29 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:30.469 13:44:29 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:30.469 13:44:29 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:30.469 13:44:29 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:30.469 13:44:29 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59243 00:05:30.469 13:44:29 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59243 ']' 00:05:30.469 13:44:29 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59243 00:05:30.469 13:44:29 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:30.469 13:44:29 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.469 13:44:29 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59243 00:05:30.469 13:44:29 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:30.469 killing process with pid 59243 00:05:30.469 13:44:29 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.469 13:44:29 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59243' 00:05:30.469 13:44:29 app_cmdline -- common/autotest_common.sh@973 -- # kill 59243 00:05:30.469 13:44:29 app_cmdline -- common/autotest_common.sh@978 -- # wait 59243 00:05:31.035 00:05:31.035 real 0m2.240s 00:05:31.035 user 0m2.634s 00:05:31.035 sys 0m0.540s 00:05:31.035 13:44:30 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.035 ************************************ 00:05:31.035 END TEST app_cmdline 00:05:31.035 ************************************ 00:05:31.035 13:44:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:31.035 13:44:30 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:31.035 13:44:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.035 13:44:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.035 13:44:30 -- common/autotest_common.sh@10 -- # set +x 00:05:31.035 ************************************ 00:05:31.035 START TEST version 00:05:31.035 ************************************ 00:05:31.035 13:44:30 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:31.293 * Looking for test storage... 
00:05:31.293 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:31.293 13:44:30 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:31.293 13:44:30 version -- common/autotest_common.sh@1711 -- # lcov --version 00:05:31.293 13:44:30 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:31.293 13:44:30 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:31.293 13:44:30 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.294 13:44:30 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.294 13:44:30 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.294 13:44:30 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.294 13:44:30 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.294 13:44:30 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.294 13:44:30 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.294 13:44:30 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.294 13:44:30 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.294 13:44:30 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.294 13:44:30 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.294 13:44:30 version -- scripts/common.sh@344 -- # case "$op" in 00:05:31.294 13:44:30 version -- scripts/common.sh@345 -- # : 1 00:05:31.294 13:44:30 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.294 13:44:30 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:31.294 13:44:30 version -- scripts/common.sh@365 -- # decimal 1 00:05:31.294 13:44:30 version -- scripts/common.sh@353 -- # local d=1 00:05:31.294 13:44:30 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.294 13:44:30 version -- scripts/common.sh@355 -- # echo 1 00:05:31.294 13:44:30 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.294 13:44:30 version -- scripts/common.sh@366 -- # decimal 2 00:05:31.294 13:44:30 version -- scripts/common.sh@353 -- # local d=2 00:05:31.294 13:44:30 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.294 13:44:30 version -- scripts/common.sh@355 -- # echo 2 00:05:31.294 13:44:30 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.294 13:44:30 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.294 13:44:30 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.294 13:44:30 version -- scripts/common.sh@368 -- # return 0 00:05:31.294 13:44:30 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.294 13:44:30 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:31.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.294 --rc genhtml_branch_coverage=1 00:05:31.294 --rc genhtml_function_coverage=1 00:05:31.294 --rc genhtml_legend=1 00:05:31.294 --rc geninfo_all_blocks=1 00:05:31.294 --rc geninfo_unexecuted_blocks=1 00:05:31.294 00:05:31.294 ' 00:05:31.294 13:44:30 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:31.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.294 --rc genhtml_branch_coverage=1 00:05:31.294 --rc genhtml_function_coverage=1 00:05:31.294 --rc genhtml_legend=1 00:05:31.294 --rc geninfo_all_blocks=1 00:05:31.294 --rc geninfo_unexecuted_blocks=1 00:05:31.294 00:05:31.294 ' 00:05:31.294 13:44:30 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:31.294 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:31.294 --rc genhtml_branch_coverage=1 00:05:31.294 --rc genhtml_function_coverage=1 00:05:31.294 --rc genhtml_legend=1 00:05:31.294 --rc geninfo_all_blocks=1 00:05:31.294 --rc geninfo_unexecuted_blocks=1 00:05:31.294 00:05:31.294 ' 00:05:31.294 13:44:30 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:31.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.294 --rc genhtml_branch_coverage=1 00:05:31.294 --rc genhtml_function_coverage=1 00:05:31.294 --rc genhtml_legend=1 00:05:31.294 --rc geninfo_all_blocks=1 00:05:31.294 --rc geninfo_unexecuted_blocks=1 00:05:31.294 00:05:31.294 ' 00:05:31.294 13:44:30 version -- app/version.sh@17 -- # get_header_version major 00:05:31.294 13:44:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:31.294 13:44:30 version -- app/version.sh@14 -- # cut -f2 00:05:31.294 13:44:30 version -- app/version.sh@14 -- # tr -d '"' 00:05:31.294 13:44:30 version -- app/version.sh@17 -- # major=25 00:05:31.294 13:44:30 version -- app/version.sh@18 -- # get_header_version minor 00:05:31.294 13:44:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:31.294 13:44:30 version -- app/version.sh@14 -- # tr -d '"' 00:05:31.294 13:44:30 version -- app/version.sh@14 -- # cut -f2 00:05:31.294 13:44:30 version -- app/version.sh@18 -- # minor=1 00:05:31.294 13:44:30 version -- app/version.sh@19 -- # get_header_version patch 00:05:31.294 13:44:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:31.294 13:44:30 version -- app/version.sh@14 -- # cut -f2 00:05:31.294 13:44:30 version -- app/version.sh@14 -- # tr -d '"' 00:05:31.294 13:44:30 version -- app/version.sh@19 -- # patch=0 00:05:31.294 13:44:30 version -- app/version.sh@20 -- # get_header_version suffix 00:05:31.294 13:44:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:31.294 13:44:30 version -- app/version.sh@14 -- # cut -f2 00:05:31.294 13:44:30 version -- app/version.sh@14 -- # tr -d '"' 00:05:31.294 13:44:30 version -- app/version.sh@20 -- # suffix=-pre 00:05:31.294 13:44:30 version -- app/version.sh@22 -- # version=25.1 00:05:31.294 13:44:30 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:31.294 13:44:30 version -- app/version.sh@28 -- # version=25.1rc0 00:05:31.294 13:44:30 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:31.294 13:44:30 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:31.294 13:44:30 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:31.294 13:44:30 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:31.294 00:05:31.294 real 0m0.261s 00:05:31.294 user 0m0.170s 00:05:31.294 sys 0m0.132s 00:05:31.294 ************************************ 00:05:31.294 END TEST version 00:05:31.294 ************************************ 00:05:31.294 13:44:30 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.294 13:44:30 version -- common/autotest_common.sh@10 -- # set +x 00:05:31.553 13:44:30 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:31.553 13:44:30 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:31.553 13:44:30 -- spdk/autotest.sh@194 -- # uname -s 00:05:31.553 13:44:30 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:31.553 13:44:30 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:31.553 13:44:30 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:05:31.553 13:44:30 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:05:31.553 13:44:30 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:31.553 13:44:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.553 13:44:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.553 13:44:30 -- common/autotest_common.sh@10 -- # set +x 00:05:31.553 ************************************ 00:05:31.553 START TEST spdk_dd 00:05:31.553 ************************************ 00:05:31.553 13:44:30 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:31.553 * Looking for test storage... 00:05:31.553 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:31.553 13:44:30 spdk_dd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:31.553 13:44:30 spdk_dd -- common/autotest_common.sh@1711 -- # lcov --version 00:05:31.553 13:44:30 spdk_dd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:31.553 13:44:30 spdk_dd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:31.553 13:44:30 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.553 13:44:30 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.553 13:44:30 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.553 13:44:30 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.553 13:44:30 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.553 13:44:30 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.553 13:44:30 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.553 13:44:30 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.553 13:44:30 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.553 13:44:30 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.553 13:44:30 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.553 13:44:30 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:05:31.553 13:44:30 spdk_dd -- scripts/common.sh@345 -- # : 1 00:05:31.553 13:44:30 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.553 13:44:30 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:31.553 13:44:30 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:05:31.553 13:44:30 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:05:31.553 13:44:30 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.553 13:44:30 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:05:31.553 13:44:30 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.553 13:44:30 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:05:31.553 13:44:30 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:05:31.553 13:44:30 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.553 13:44:30 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:05:31.553 13:44:30 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.553 13:44:30 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.553 13:44:30 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.553 13:44:30 spdk_dd -- scripts/common.sh@368 -- # return 0 00:05:31.553 13:44:30 spdk_dd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.553 13:44:30 spdk_dd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:31.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.553 --rc genhtml_branch_coverage=1 00:05:31.553 --rc genhtml_function_coverage=1 00:05:31.553 --rc genhtml_legend=1 00:05:31.553 --rc geninfo_all_blocks=1 00:05:31.553 --rc geninfo_unexecuted_blocks=1 00:05:31.553 00:05:31.553 ' 00:05:31.553 13:44:30 spdk_dd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:31.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.553 --rc genhtml_branch_coverage=1 00:05:31.553 --rc genhtml_function_coverage=1 00:05:31.553 --rc genhtml_legend=1 00:05:31.553 --rc geninfo_all_blocks=1 00:05:31.553 --rc geninfo_unexecuted_blocks=1 00:05:31.553 00:05:31.553 ' 00:05:31.553 13:44:30 spdk_dd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:31.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.553 --rc genhtml_branch_coverage=1 00:05:31.553 --rc genhtml_function_coverage=1 00:05:31.553 --rc genhtml_legend=1 00:05:31.553 --rc geninfo_all_blocks=1 00:05:31.553 --rc geninfo_unexecuted_blocks=1 00:05:31.553 00:05:31.553 ' 00:05:31.553 13:44:30 spdk_dd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:31.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.553 --rc genhtml_branch_coverage=1 00:05:31.553 --rc genhtml_function_coverage=1 00:05:31.553 --rc genhtml_legend=1 00:05:31.553 --rc geninfo_all_blocks=1 00:05:31.553 --rc geninfo_unexecuted_blocks=1 00:05:31.553 00:05:31.553 ' 00:05:31.553 13:44:30 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:31.553 13:44:30 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:05:31.553 13:44:30 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:31.553 13:44:30 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:31.553 13:44:30 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:31.553 13:44:30 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.553 13:44:30 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.553 13:44:30 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.553 13:44:30 spdk_dd -- paths/export.sh@5 -- # export PATH 00:05:31.553 13:44:30 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.553 13:44:30 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:32.120 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:32.120 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:32.120 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:32.120 13:44:31 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:05:32.120 13:44:31 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:05:32.120 13:44:31 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:05:32.120 13:44:31 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:05:32.120 13:44:31 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:05:32.120 13:44:31 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:05:32.120 13:44:31 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:05:32.120 13:44:31 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:05:32.120 13:44:31 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:05:32.120 13:44:31 spdk_dd -- scripts/common.sh@233 -- # local class 00:05:32.120 13:44:31 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:05:32.120 13:44:31 spdk_dd -- scripts/common.sh@235 -- # local progif 00:05:32.120 13:44:31 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:05:32.120 13:44:31 spdk_dd -- scripts/common.sh@236 -- # class=01 00:05:32.121 13:44:31 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:05:32.121 13:44:31 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:05:32.121 13:44:31 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:05:32.121 13:44:31 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:05:32.121 13:44:31 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:05:32.121 13:44:31 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:05:32.121 13:44:31 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:05:32.121 13:44:31 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:05:32.121 13:44:31 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:05:32.121 13:44:31 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:05:32.121 13:44:31 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:32.121 13:44:31 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:05:32.121 13:44:31 spdk_dd -- scripts/common.sh@18 -- # local i 00:05:32.121 13:44:31 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:05:32.121 13:44:31 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:05:32.121 13:44:31 spdk_dd -- scripts/common.sh@27 -- # return 0 00:05:32.121 13:44:31 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:05:32.121 13:44:31 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:32.121 13:44:31 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:05:32.121 13:44:31 spdk_dd -- scripts/common.sh@18 -- # local i 00:05:32.121 13:44:31 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:05:32.121 13:44:31 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:05:32.121 13:44:31 spdk_dd -- scripts/common.sh@27 -- # return 0 00:05:32.121 13:44:31 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:05:32.121 13:44:31 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:05:32.121 13:44:31 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:05:32.121 13:44:31 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:05:32.121 13:44:31 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:05:32.121 13:44:31 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:05:32.121 13:44:31 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:05:32.121 13:44:31 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:05:32.121 13:44:31 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:05:32.121 13:44:31 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:05:32.121 13:44:31 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:05:32.121 13:44:31 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:05:32.121 13:44:31 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:32.121 13:44:31 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@139 -- # local lib 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.11.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.12.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.11.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.12.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.121 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:05:32.122 * spdk_dd linked to liburing 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:32.122 13:44:31 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:05:32.122 13:44:31 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:05:32.123 13:44:31 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:05:32.123 13:44:31 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:05:32.123 13:44:31 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:05:32.123 13:44:31 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:32.123 13:44:31 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:05:32.123 13:44:31 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:05:32.123 13:44:31 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:05:32.123 13:44:31 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:05:32.123 13:44:31 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:05:32.123 13:44:31 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:05:32.123 13:44:31 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:05:32.123 13:44:31 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:05:32.123 13:44:31 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:05:32.123 13:44:31 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:05:32.123 13:44:31 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:05:32.123 13:44:31 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:32.123 13:44:31 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:05:32.123 13:44:31 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:05:32.123 13:44:31 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:05:32.123 13:44:31 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:05:32.123 13:44:31 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:05:32.123 13:44:31 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:05:32.123 13:44:31 spdk_dd -- dd/common.sh@153 -- # return 0 00:05:32.123 13:44:31 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:05:32.123 13:44:31 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:32.123 13:44:31 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:32.123 13:44:31 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.123 13:44:31 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:05:32.123 ************************************ 00:05:32.123 START TEST spdk_dd_basic_rw 00:05:32.123 ************************************ 00:05:32.123 13:44:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:32.123 * Looking for test storage... 00:05:32.123 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:32.123 13:44:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:32.123 13:44:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:32.123 13:44:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # lcov --version 00:05:32.381 13:44:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:32.381 13:44:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:32.381 13:44:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:32.381 13:44:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:32.381 13:44:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.381 13:44:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:05:32.381 13:44:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:05:32.381 13:44:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:05:32.381 13:44:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:05:32.381 13:44:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:05:32.381 13:44:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:05:32.381 13:44:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:32.381 13:44:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:05:32.381 13:44:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:05:32.381 13:44:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:32.381 13:44:31 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:32.381 13:44:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:05:32.381 13:44:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:05:32.381 13:44:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.381 13:44:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:05:32.381 13:44:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:05:32.381 13:44:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:05:32.381 13:44:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:05:32.381 13:44:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.382 13:44:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:05:32.382 13:44:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:05:32.382 13:44:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:32.382 13:44:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:32.382 13:44:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:05:32.382 13:44:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.382 13:44:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:32.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.382 --rc genhtml_branch_coverage=1 00:05:32.382 --rc genhtml_function_coverage=1 00:05:32.382 --rc genhtml_legend=1 00:05:32.382 --rc geninfo_all_blocks=1 00:05:32.382 --rc geninfo_unexecuted_blocks=1 00:05:32.382 00:05:32.382 ' 00:05:32.382 13:44:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:32.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.382 --rc genhtml_branch_coverage=1 00:05:32.382 --rc genhtml_function_coverage=1 00:05:32.382 --rc genhtml_legend=1 00:05:32.382 --rc geninfo_all_blocks=1 00:05:32.382 --rc geninfo_unexecuted_blocks=1 00:05:32.382 00:05:32.382 ' 00:05:32.382 13:44:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:32.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.382 --rc genhtml_branch_coverage=1 00:05:32.382 --rc genhtml_function_coverage=1 00:05:32.382 --rc genhtml_legend=1 00:05:32.382 --rc geninfo_all_blocks=1 00:05:32.382 --rc geninfo_unexecuted_blocks=1 00:05:32.382 00:05:32.382 ' 00:05:32.382 13:44:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:32.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.382 --rc genhtml_branch_coverage=1 00:05:32.382 --rc genhtml_function_coverage=1 00:05:32.382 --rc genhtml_legend=1 00:05:32.382 --rc geninfo_all_blocks=1 00:05:32.382 --rc geninfo_unexecuted_blocks=1 00:05:32.382 00:05:32.382 ' 00:05:32.382 13:44:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:32.382 13:44:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:05:32.382 13:44:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:32.382 13:44:31 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:32.382 13:44:31 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:32.382 13:44:31 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.382 13:44:31 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.382 13:44:31 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.382 13:44:31 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:05:32.382 13:44:31 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.382 13:44:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:05:32.382 13:44:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:05:32.382 13:44:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:05:32.382 13:44:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:05:32.382 13:44:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:05:32.382 13:44:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:05:32.382 13:44:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:05:32.382 13:44:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:32.382 13:44:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
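[editor's sketch] The trace above configures the basic_rw run (Nvme0 at 0000:00:10.0, dd.dump0/dd.dump1), and the entries that follow (dd/common.sh@124-134) derive the drive's native block size by matching the spdk_nvme_identify report against two regexes. A minimal sketch of that logic, assuming only what the trace itself shows (the identify binary path, the -r transport string, and the two patterns); get_native_nvme_bs_sketch is an illustrative name, not the actual helper in dd/common.sh:

get_native_nvme_bs_sketch() {
    # Sketch only: mirrors the steps visible in the xtrace below, not the real dd/common.sh source.
    local pci=$1 id lbaf re
    # Full identify report for the controller at this PCI address (binary path as seen in the trace).
    id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:pcie traddr:$pci") || return 1
    # First match: which LBA format is currently active ("#04" on this QEMU controller).
    re='Current LBA Format: *LBA Format #([0-9]+)'
    [[ $id =~ $re ]] || return 1
    lbaf=${BASH_REMATCH[1]}
    # Second match: the data size of that format is the native block size.
    re="LBA Format #${lbaf}: Data Size: *([0-9]+)"
    [[ $id =~ $re ]] || return 1
    echo "${BASH_REMATCH[1]}"
}
# Example: get_native_nvme_bs_sketch 0000:00:10.0   ->   4096

The helper in the trace itself captures the report with mapfile into an array and matches ${id[*]}, but the outcome is the same: lbaf=04, then a native block size of 4096 bytes, which is what the dd_bs_lt_native_bs test later compares against --bs=2048.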
00:05:32.382 13:44:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:05:32.382 13:44:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:05:32.382 13:44:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:05:32.382 13:44:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:05:32.643 13:44:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:05:32.643 13:44:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:05:32.644 13:44:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration 
Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported 
SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format 
#02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:05:32.644 13:44:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:05:32.644 13:44:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:05:32.644 13:44:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:05:32.644 13:44:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:05:32.644 13:44:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:32.644 13:44:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:32.644 13:44:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.644 13:44:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:05:32.644 13:44:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:32.644 13:44:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:32.644 13:44:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:32.644 ************************************ 00:05:32.644 START TEST dd_bs_lt_native_bs 00:05:32.644 ************************************ 00:05:32.644 13:44:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:32.644 13:44:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:05:32.644 13:44:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:32.644 13:44:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:32.644 13:44:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.644 13:44:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:32.644 13:44:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.644 13:44:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:32.644 13:44:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.644 13:44:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:32.644 13:44:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:32.644 13:44:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:32.644 { 00:05:32.644 "subsystems": [ 00:05:32.644 { 00:05:32.644 "subsystem": "bdev", 00:05:32.644 "config": [ 00:05:32.644 { 00:05:32.644 "params": { 00:05:32.644 "trtype": "pcie", 00:05:32.644 "traddr": "0000:00:10.0", 00:05:32.644 "name": "Nvme0" 00:05:32.644 }, 00:05:32.644 "method": "bdev_nvme_attach_controller" 00:05:32.644 }, 00:05:32.644 { 00:05:32.644 "method": "bdev_wait_for_examine" 00:05:32.644 } 00:05:32.644 ] 00:05:32.644 } 00:05:32.644 ] 00:05:32.644 } 00:05:32.644 [2024-12-06 13:44:31.888003] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:05:32.644 [2024-12-06 13:44:31.888116] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59594 ] 00:05:32.644 [2024-12-06 13:44:32.039920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.902 [2024-12-06 13:44:32.101864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.902 [2024-12-06 13:44:32.179064] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:33.161 [2024-12-06 13:44:32.306999] spdk_dd.c:1159:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:05:33.161 [2024-12-06 13:44:32.307083] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:33.161 [2024-12-06 13:44:32.481510] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:05:33.161 13:44:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:05:33.161 13:44:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:33.161 13:44:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:05:33.161 13:44:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:05:33.161 13:44:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:05:33.161 13:44:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:33.161 00:05:33.161 real 0m0.716s 00:05:33.161 user 0m0.472s 00:05:33.161 sys 0m0.198s 00:05:33.161 13:44:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.161 13:44:32 
spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:05:33.161 ************************************ 00:05:33.161 END TEST dd_bs_lt_native_bs 00:05:33.161 ************************************ 00:05:33.421 13:44:32 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:05:33.421 13:44:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:33.421 13:44:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.421 13:44:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:33.421 ************************************ 00:05:33.421 START TEST dd_rw 00:05:33.421 ************************************ 00:05:33.421 13:44:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:05:33.421 13:44:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:05:33.421 13:44:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:05:33.421 13:44:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:05:33.421 13:44:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:05:33.421 13:44:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:33.421 13:44:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:33.421 13:44:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:33.421 13:44:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:33.421 13:44:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:33.421 13:44:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:33.421 13:44:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:33.421 13:44:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:33.421 13:44:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:05:33.421 13:44:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:05:33.421 13:44:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:05:33.421 13:44:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:33.421 13:44:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:33.421 13:44:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:33.680 13:44:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:05:33.680 13:44:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:33.680 13:44:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:33.680 13:44:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:33.940 [2024-12-06 13:44:33.123323] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:05:33.940 [2024-12-06 13:44:33.123417] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59625 ] 00:05:33.940 { 00:05:33.940 "subsystems": [ 00:05:33.940 { 00:05:33.941 "subsystem": "bdev", 00:05:33.941 "config": [ 00:05:33.941 { 00:05:33.941 "params": { 00:05:33.941 "trtype": "pcie", 00:05:33.941 "traddr": "0000:00:10.0", 00:05:33.941 "name": "Nvme0" 00:05:33.941 }, 00:05:33.941 "method": "bdev_nvme_attach_controller" 00:05:33.941 }, 00:05:33.941 { 00:05:33.941 "method": "bdev_wait_for_examine" 00:05:33.941 } 00:05:33.941 ] 00:05:33.941 } 00:05:33.941 ] 00:05:33.941 } 00:05:33.941 [2024-12-06 13:44:33.267568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.941 [2024-12-06 13:44:33.316448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.200 [2024-12-06 13:44:33.387273] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:34.200  [2024-12-06T13:44:33.864Z] Copying: 60/60 [kB] (average 19 MBps) 00:05:34.460 00:05:34.460 13:44:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:05:34.460 13:44:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:34.460 13:44:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:34.460 13:44:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:34.460 { 00:05:34.460 "subsystems": [ 00:05:34.460 { 00:05:34.460 "subsystem": "bdev", 00:05:34.460 "config": [ 00:05:34.460 { 00:05:34.460 "params": { 00:05:34.460 "trtype": "pcie", 00:05:34.460 "traddr": "0000:00:10.0", 00:05:34.460 "name": "Nvme0" 00:05:34.460 }, 00:05:34.460 "method": "bdev_nvme_attach_controller" 00:05:34.460 }, 00:05:34.460 { 00:05:34.460 "method": "bdev_wait_for_examine" 00:05:34.460 } 00:05:34.460 ] 00:05:34.460 } 00:05:34.460 ] 00:05:34.460 } 00:05:34.460 [2024-12-06 13:44:33.810191] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:05:34.460 [2024-12-06 13:44:33.810315] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59644 ] 00:05:34.719 [2024-12-06 13:44:33.953579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.719 [2024-12-06 13:44:33.996728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.719 [2024-12-06 13:44:34.066549] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:34.998  [2024-12-06T13:44:34.674Z] Copying: 60/60 [kB] (average 19 MBps) 00:05:35.270 00:05:35.270 13:44:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:35.270 13:44:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:35.270 13:44:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:35.270 13:44:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:35.270 13:44:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:05:35.270 13:44:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:35.270 13:44:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:35.270 13:44:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:35.270 13:44:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:35.270 13:44:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:35.270 13:44:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:35.270 [2024-12-06 13:44:34.478643] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:05:35.270 [2024-12-06 13:44:34.478751] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59660 ] 00:05:35.270 { 00:05:35.270 "subsystems": [ 00:05:35.270 { 00:05:35.270 "subsystem": "bdev", 00:05:35.270 "config": [ 00:05:35.270 { 00:05:35.270 "params": { 00:05:35.270 "trtype": "pcie", 00:05:35.270 "traddr": "0000:00:10.0", 00:05:35.270 "name": "Nvme0" 00:05:35.270 }, 00:05:35.270 "method": "bdev_nvme_attach_controller" 00:05:35.270 }, 00:05:35.270 { 00:05:35.270 "method": "bdev_wait_for_examine" 00:05:35.270 } 00:05:35.270 ] 00:05:35.270 } 00:05:35.270 ] 00:05:35.270 } 00:05:35.270 [2024-12-06 13:44:34.616763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.270 [2024-12-06 13:44:34.659529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.529 [2024-12-06 13:44:34.730561] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:35.529  [2024-12-06T13:44:35.193Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:05:35.789 00:05:35.789 13:44:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:35.789 13:44:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:05:35.789 13:44:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:05:35.789 13:44:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:05:35.789 13:44:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:35.789 13:44:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:35.789 13:44:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:36.355 13:44:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:05:36.355 13:44:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:36.355 13:44:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:36.355 13:44:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:36.355 [2024-12-06 13:44:35.603679] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:05:36.355 [2024-12-06 13:44:35.603798] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59679 ] 00:05:36.355 { 00:05:36.355 "subsystems": [ 00:05:36.355 { 00:05:36.355 "subsystem": "bdev", 00:05:36.355 "config": [ 00:05:36.355 { 00:05:36.355 "params": { 00:05:36.355 "trtype": "pcie", 00:05:36.355 "traddr": "0000:00:10.0", 00:05:36.355 "name": "Nvme0" 00:05:36.355 }, 00:05:36.355 "method": "bdev_nvme_attach_controller" 00:05:36.355 }, 00:05:36.355 { 00:05:36.355 "method": "bdev_wait_for_examine" 00:05:36.355 } 00:05:36.355 ] 00:05:36.355 } 00:05:36.355 ] 00:05:36.355 } 00:05:36.355 [2024-12-06 13:44:35.747182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.613 [2024-12-06 13:44:35.798350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.613 [2024-12-06 13:44:35.871057] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:36.613  [2024-12-06T13:44:36.277Z] Copying: 60/60 [kB] (average 58 MBps) 00:05:36.873 00:05:36.873 13:44:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:05:36.873 13:44:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:36.873 13:44:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:36.873 13:44:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:37.132 [2024-12-06 13:44:36.278599] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:05:37.132 [2024-12-06 13:44:36.278755] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59698 ] 00:05:37.132 { 00:05:37.132 "subsystems": [ 00:05:37.132 { 00:05:37.132 "subsystem": "bdev", 00:05:37.132 "config": [ 00:05:37.132 { 00:05:37.132 "params": { 00:05:37.132 "trtype": "pcie", 00:05:37.132 "traddr": "0000:00:10.0", 00:05:37.132 "name": "Nvme0" 00:05:37.132 }, 00:05:37.132 "method": "bdev_nvme_attach_controller" 00:05:37.132 }, 00:05:37.132 { 00:05:37.132 "method": "bdev_wait_for_examine" 00:05:37.132 } 00:05:37.132 ] 00:05:37.132 } 00:05:37.132 ] 00:05:37.132 } 00:05:37.132 [2024-12-06 13:44:36.425098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.132 [2024-12-06 13:44:36.471041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.391 [2024-12-06 13:44:36.541861] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:37.391  [2024-12-06T13:44:37.054Z] Copying: 60/60 [kB] (average 58 MBps) 00:05:37.650 00:05:37.651 13:44:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:37.651 13:44:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:37.651 13:44:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:37.651 13:44:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:37.651 13:44:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:05:37.651 13:44:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:37.651 13:44:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:37.651 13:44:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:37.651 13:44:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:37.651 13:44:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:37.651 13:44:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:37.651 { 00:05:37.651 "subsystems": [ 00:05:37.651 { 00:05:37.651 "subsystem": "bdev", 00:05:37.651 "config": [ 00:05:37.651 { 00:05:37.651 "params": { 00:05:37.651 "trtype": "pcie", 00:05:37.651 "traddr": "0000:00:10.0", 00:05:37.651 "name": "Nvme0" 00:05:37.651 }, 00:05:37.651 "method": "bdev_nvme_attach_controller" 00:05:37.651 }, 00:05:37.651 { 00:05:37.651 "method": "bdev_wait_for_examine" 00:05:37.651 } 00:05:37.651 ] 00:05:37.651 } 00:05:37.651 ] 00:05:37.651 } 00:05:37.651 [2024-12-06 13:44:36.988449] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:05:37.651 [2024-12-06 13:44:36.988607] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59713 ] 00:05:37.910 [2024-12-06 13:44:37.137005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.910 [2024-12-06 13:44:37.179944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.910 [2024-12-06 13:44:37.253076] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:38.169  [2024-12-06T13:44:37.832Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:38.428 00:05:38.428 13:44:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:38.428 13:44:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:38.428 13:44:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:05:38.428 13:44:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:05:38.428 13:44:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:05:38.428 13:44:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:05:38.428 13:44:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:38.428 13:44:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:38.687 13:44:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:05:38.687 13:44:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:38.687 13:44:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:38.687 13:44:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:38.947 { 00:05:38.947 "subsystems": [ 00:05:38.947 { 00:05:38.947 "subsystem": "bdev", 00:05:38.947 "config": [ 00:05:38.947 { 00:05:38.947 "params": { 00:05:38.947 "trtype": "pcie", 00:05:38.947 "traddr": "0000:00:10.0", 00:05:38.947 "name": "Nvme0" 00:05:38.947 }, 00:05:38.947 "method": "bdev_nvme_attach_controller" 00:05:38.947 }, 00:05:38.947 { 00:05:38.947 "method": "bdev_wait_for_examine" 00:05:38.947 } 00:05:38.947 ] 00:05:38.947 } 00:05:38.947 ] 00:05:38.947 } 00:05:38.947 [2024-12-06 13:44:38.100215] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:05:38.947 [2024-12-06 13:44:38.100322] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59732 ] 00:05:38.947 [2024-12-06 13:44:38.243294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.947 [2024-12-06 13:44:38.296393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.207 [2024-12-06 13:44:38.366821] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:39.207  [2024-12-06T13:44:38.870Z] Copying: 56/56 [kB] (average 54 MBps) 00:05:39.466 00:05:39.466 13:44:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:05:39.466 13:44:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:39.466 13:44:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:39.466 13:44:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:39.466 [2024-12-06 13:44:38.778160] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:05:39.466 [2024-12-06 13:44:38.778275] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59746 ] 00:05:39.466 { 00:05:39.466 "subsystems": [ 00:05:39.466 { 00:05:39.466 "subsystem": "bdev", 00:05:39.466 "config": [ 00:05:39.466 { 00:05:39.466 "params": { 00:05:39.466 "trtype": "pcie", 00:05:39.466 "traddr": "0000:00:10.0", 00:05:39.466 "name": "Nvme0" 00:05:39.466 }, 00:05:39.466 "method": "bdev_nvme_attach_controller" 00:05:39.466 }, 00:05:39.466 { 00:05:39.466 "method": "bdev_wait_for_examine" 00:05:39.466 } 00:05:39.466 ] 00:05:39.466 } 00:05:39.466 ] 00:05:39.466 } 00:05:39.727 [2024-12-06 13:44:38.914563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.727 [2024-12-06 13:44:38.964447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.727 [2024-12-06 13:44:39.032226] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:39.987  [2024-12-06T13:44:39.391Z] Copying: 56/56 [kB] (average 27 MBps) 00:05:39.987 00:05:39.987 13:44:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:39.987 13:44:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:05:39.987 13:44:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:39.987 13:44:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:39.987 13:44:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:05:39.987 13:44:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:39.987 13:44:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:39.987 13:44:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 
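The passes traced above all follow the same dd_rw cycle: pick a block size and queue depth, write a generated test file to the Nvme0n1 bdev with spdk_dd, read the same number of blocks back, and diff the two dumps before the bdev is cleared for the next pass. A minimal stand-alone sketch of that cycle, assuming spdk_dd is on PATH and that the bdev JSON config echoed in the trace has been saved to bdev_nvme.json (the file names and the urandom test pattern are illustrative, not the autotest helpers):

  bs=8192 qd=1 count=7
  dd if=/dev/urandom of=dd.dump0 bs=$bs count=$count status=none                                # test pattern
  spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=$bs --qd=$qd --json bdev_nvme.json                    # write out
  spdk_dd --ib=Nvme0n1 --of=dd.dump1 --bs=$bs --qd=$qd --count=$count --json bdev_nvme.json     # read back
  diff -q dd.dump0 dd.dump1 && echo "bs=$bs qd=$qd round trip OK"                               # verify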
00:05:39.987 13:44:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:39.987 13:44:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:40.247 13:44:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:40.247 [2024-12-06 13:44:39.441502] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:05:40.247 [2024-12-06 13:44:39.441623] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59767 ] 00:05:40.247 { 00:05:40.247 "subsystems": [ 00:05:40.247 { 00:05:40.247 "subsystem": "bdev", 00:05:40.247 "config": [ 00:05:40.247 { 00:05:40.247 "params": { 00:05:40.247 "trtype": "pcie", 00:05:40.247 "traddr": "0000:00:10.0", 00:05:40.247 "name": "Nvme0" 00:05:40.247 }, 00:05:40.247 "method": "bdev_nvme_attach_controller" 00:05:40.247 }, 00:05:40.247 { 00:05:40.247 "method": "bdev_wait_for_examine" 00:05:40.247 } 00:05:40.247 ] 00:05:40.247 } 00:05:40.247 ] 00:05:40.247 } 00:05:40.247 [2024-12-06 13:44:39.587725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.247 [2024-12-06 13:44:39.634873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.507 [2024-12-06 13:44:39.705165] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:40.507  [2024-12-06T13:44:40.170Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:40.766 00:05:40.766 13:44:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:40.766 13:44:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:05:40.766 13:44:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:05:40.766 13:44:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:05:40.766 13:44:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:05:40.766 13:44:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:40.766 13:44:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:41.336 13:44:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:05:41.336 13:44:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:41.336 13:44:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:41.336 13:44:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:41.336 { 00:05:41.336 "subsystems": [ 00:05:41.336 { 00:05:41.336 "subsystem": "bdev", 00:05:41.336 "config": [ 00:05:41.336 { 00:05:41.336 "params": { 00:05:41.336 "trtype": "pcie", 00:05:41.336 "traddr": "0000:00:10.0", 00:05:41.336 "name": "Nvme0" 00:05:41.336 }, 00:05:41.336 "method": "bdev_nvme_attach_controller" 00:05:41.336 }, 00:05:41.336 { 00:05:41.336 "method": "bdev_wait_for_examine" 00:05:41.336 } 00:05:41.336 ] 00:05:41.336 } 00:05:41.336 ] 00:05:41.336 } 00:05:41.336 [2024-12-06 13:44:40.586911] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:05:41.336 [2024-12-06 13:44:40.587078] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59786 ] 00:05:41.336 [2024-12-06 13:44:40.737233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.596 [2024-12-06 13:44:40.781545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.596 [2024-12-06 13:44:40.849652] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:41.596  [2024-12-06T13:44:41.259Z] Copying: 56/56 [kB] (average 54 MBps) 00:05:41.855 00:05:41.855 13:44:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:05:41.855 13:44:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:41.855 13:44:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:41.855 13:44:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:42.115 { 00:05:42.115 "subsystems": [ 00:05:42.115 { 00:05:42.115 "subsystem": "bdev", 00:05:42.115 "config": [ 00:05:42.115 { 00:05:42.115 "params": { 00:05:42.115 "trtype": "pcie", 00:05:42.115 "traddr": "0000:00:10.0", 00:05:42.115 "name": "Nvme0" 00:05:42.115 }, 00:05:42.115 "method": "bdev_nvme_attach_controller" 00:05:42.115 }, 00:05:42.115 { 00:05:42.115 "method": "bdev_wait_for_examine" 00:05:42.115 } 00:05:42.115 ] 00:05:42.115 } 00:05:42.115 ] 00:05:42.115 } 00:05:42.115 [2024-12-06 13:44:41.261929] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:05:42.115 [2024-12-06 13:44:41.262029] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59799 ] 00:05:42.115 [2024-12-06 13:44:41.407874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.115 [2024-12-06 13:44:41.454409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.374 [2024-12-06 13:44:41.525191] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:42.374  [2024-12-06T13:44:42.038Z] Copying: 56/56 [kB] (average 54 MBps) 00:05:42.634 00:05:42.634 13:44:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:42.634 13:44:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:05:42.634 13:44:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:42.634 13:44:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:42.634 13:44:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:05:42.634 13:44:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:42.634 13:44:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:42.634 13:44:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:42.634 13:44:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:42.634 13:44:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:42.634 13:44:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:42.634 [2024-12-06 13:44:41.937546] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:05:42.634 { 00:05:42.634 "subsystems": [ 00:05:42.634 { 00:05:42.634 "subsystem": "bdev", 00:05:42.634 "config": [ 00:05:42.634 { 00:05:42.634 "params": { 00:05:42.634 "trtype": "pcie", 00:05:42.634 "traddr": "0000:00:10.0", 00:05:42.634 "name": "Nvme0" 00:05:42.634 }, 00:05:42.634 "method": "bdev_nvme_attach_controller" 00:05:42.634 }, 00:05:42.634 { 00:05:42.634 "method": "bdev_wait_for_examine" 00:05:42.634 } 00:05:42.634 ] 00:05:42.634 } 00:05:42.634 ] 00:05:42.634 } 00:05:42.634 [2024-12-06 13:44:41.937678] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59815 ] 00:05:42.893 [2024-12-06 13:44:42.081569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.893 [2024-12-06 13:44:42.132573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.893 [2024-12-06 13:44:42.201585] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:43.153  [2024-12-06T13:44:42.557Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:43.153 00:05:43.153 13:44:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:43.153 13:44:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:43.153 13:44:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:05:43.153 13:44:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:05:43.153 13:44:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:05:43.153 13:44:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:05:43.153 13:44:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:43.153 13:44:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:43.720 13:44:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:05:43.720 13:44:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:43.720 13:44:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:43.720 13:44:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:43.720 [2024-12-06 13:44:42.989870] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
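The qd/bs matrix driving these passes is set up at the top of the dd_rw trace, qds=(1 64) and three block sizes built by left-shifting the native block size, and it reaches its third block size here: bs=16384 with count=3 and size=49152. A sketch of that derivation, with the per-block-size counts hard-coded from the log (15, 7, 3) because the exact count formula in basic_rw.sh is not part of this excerpt:

  native_bs=4096
  qds=(1 64)
  bss=()
  for shift in 0 1 2; do
    bss+=($((native_bs << shift)))              # 4096, 8192, 16384
  done
  declare -A counts=([4096]=15 [8192]=7 [16384]=3)
  for bs in "${bss[@]}"; do
    for qd in "${qds[@]}"; do
      size=$((bs * ${counts[$bs]}))             # 61440, 57344, 49152 bytes per pass
      echo "bs=$bs qd=$qd count=${counts[$bs]} size=$size"
    done
  done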
00:05:43.720 [2024-12-06 13:44:42.990611] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59834 ] 00:05:43.720 { 00:05:43.720 "subsystems": [ 00:05:43.720 { 00:05:43.720 "subsystem": "bdev", 00:05:43.720 "config": [ 00:05:43.720 { 00:05:43.720 "params": { 00:05:43.720 "trtype": "pcie", 00:05:43.720 "traddr": "0000:00:10.0", 00:05:43.720 "name": "Nvme0" 00:05:43.720 }, 00:05:43.720 "method": "bdev_nvme_attach_controller" 00:05:43.720 }, 00:05:43.720 { 00:05:43.720 "method": "bdev_wait_for_examine" 00:05:43.720 } 00:05:43.720 ] 00:05:43.720 } 00:05:43.720 ] 00:05:43.720 } 00:05:43.979 [2024-12-06 13:44:43.131932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.979 [2024-12-06 13:44:43.181433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.979 [2024-12-06 13:44:43.252283] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:43.979  [2024-12-06T13:44:43.642Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:44.238 00:05:44.238 13:44:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:05:44.238 13:44:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:44.238 13:44:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:44.238 13:44:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:44.497 { 00:05:44.497 "subsystems": [ 00:05:44.497 { 00:05:44.497 "subsystem": "bdev", 00:05:44.497 "config": [ 00:05:44.497 { 00:05:44.497 "params": { 00:05:44.497 "trtype": "pcie", 00:05:44.497 "traddr": "0000:00:10.0", 00:05:44.497 "name": "Nvme0" 00:05:44.497 }, 00:05:44.497 "method": "bdev_nvme_attach_controller" 00:05:44.497 }, 00:05:44.497 { 00:05:44.497 "method": "bdev_wait_for_examine" 00:05:44.497 } 00:05:44.497 ] 00:05:44.497 } 00:05:44.497 ] 00:05:44.497 } 00:05:44.497 [2024-12-06 13:44:43.663925] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:05:44.497 [2024-12-06 13:44:43.664029] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59853 ] 00:05:44.497 [2024-12-06 13:44:43.804999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.497 [2024-12-06 13:44:43.850397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.756 [2024-12-06 13:44:43.921288] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:44.756  [2024-12-06T13:44:44.419Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:45.015 00:05:45.015 13:44:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:45.015 13:44:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:05:45.015 13:44:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:45.015 13:44:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:45.015 13:44:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:05:45.015 13:44:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:45.015 13:44:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:45.015 13:44:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:45.015 13:44:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:45.015 13:44:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:45.015 13:44:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:45.015 [2024-12-06 13:44:44.322038] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
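Every spdk_dd invocation above receives its bdev configuration on a file-descriptor path (--json /dev/fd/62) rather than from a file on disk: gen_conf emits the JSON block shown in the trace and the harness hands it to spdk_dd through an extra descriptor. The exact plumbing inside the dd helpers is not shown in this excerpt; a sketch that reproduces the effect with bash process substitution, which likewise surfaces as a /dev/fd path, using an illustrative stand-in for gen_conf:

  gen_conf() {    # illustrative stand-in, not the dd/common.sh implementation
    printf '%s\n' '{ "subsystems": [ { "subsystem": "bdev", "config": [' \
      '  { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },' \
      '    "method": "bdev_nvme_attach_controller" },' \
      '  { "method": "bdev_wait_for_examine" } ] } ] }'
  }
  spdk_dd --ib=Nvme0n1 --of=dd.dump1 --bs=16384 --qd=1 --count=3 --json <(gen_conf)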
00:05:45.015 [2024-12-06 13:44:44.322153] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59868 ] 00:05:45.015 { 00:05:45.015 "subsystems": [ 00:05:45.015 { 00:05:45.015 "subsystem": "bdev", 00:05:45.015 "config": [ 00:05:45.015 { 00:05:45.015 "params": { 00:05:45.015 "trtype": "pcie", 00:05:45.015 "traddr": "0000:00:10.0", 00:05:45.015 "name": "Nvme0" 00:05:45.015 }, 00:05:45.015 "method": "bdev_nvme_attach_controller" 00:05:45.016 }, 00:05:45.016 { 00:05:45.016 "method": "bdev_wait_for_examine" 00:05:45.016 } 00:05:45.016 ] 00:05:45.016 } 00:05:45.016 ] 00:05:45.016 } 00:05:45.274 [2024-12-06 13:44:44.461226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.274 [2024-12-06 13:44:44.502929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.274 [2024-12-06 13:44:44.573160] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:45.533  [2024-12-06T13:44:44.937Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:45.533 00:05:45.533 13:44:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:45.533 13:44:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:05:45.533 13:44:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:05:45.533 13:44:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:05:45.533 13:44:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:05:45.533 13:44:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:45.533 13:44:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:46.100 13:44:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:05:46.100 13:44:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:46.100 13:44:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:46.100 13:44:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:46.100 { 00:05:46.100 "subsystems": [ 00:05:46.100 { 00:05:46.100 "subsystem": "bdev", 00:05:46.100 "config": [ 00:05:46.100 { 00:05:46.100 "params": { 00:05:46.100 "trtype": "pcie", 00:05:46.100 "traddr": "0000:00:10.0", 00:05:46.100 "name": "Nvme0" 00:05:46.100 }, 00:05:46.100 "method": "bdev_nvme_attach_controller" 00:05:46.100 }, 00:05:46.100 { 00:05:46.100 "method": "bdev_wait_for_examine" 00:05:46.100 } 00:05:46.100 ] 00:05:46.100 } 00:05:46.100 ] 00:05:46.100 } 00:05:46.100 [2024-12-06 13:44:45.343159] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:05:46.100 [2024-12-06 13:44:45.343271] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59887 ] 00:05:46.100 [2024-12-06 13:44:45.488345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.359 [2024-12-06 13:44:45.536920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.359 [2024-12-06 13:44:45.606517] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:46.359  [2024-12-06T13:44:46.022Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:46.618 00:05:46.618 13:44:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:05:46.618 13:44:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:46.618 13:44:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:46.618 13:44:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:46.618 [2024-12-06 13:44:45.999038] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:05:46.618 [2024-12-06 13:44:45.999127] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59901 ] 00:05:46.618 { 00:05:46.618 "subsystems": [ 00:05:46.618 { 00:05:46.618 "subsystem": "bdev", 00:05:46.618 "config": [ 00:05:46.618 { 00:05:46.618 "params": { 00:05:46.618 "trtype": "pcie", 00:05:46.618 "traddr": "0000:00:10.0", 00:05:46.618 "name": "Nvme0" 00:05:46.618 }, 00:05:46.618 "method": "bdev_nvme_attach_controller" 00:05:46.618 }, 00:05:46.618 { 00:05:46.618 "method": "bdev_wait_for_examine" 00:05:46.618 } 00:05:46.618 ] 00:05:46.618 } 00:05:46.618 ] 00:05:46.618 } 00:05:46.877 [2024-12-06 13:44:46.136820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.877 [2024-12-06 13:44:46.184015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.877 [2024-12-06 13:44:46.254216] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:47.137  [2024-12-06T13:44:46.817Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:47.413 00:05:47.413 13:44:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:47.413 13:44:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:05:47.413 13:44:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:47.413 13:44:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:47.413 13:44:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:05:47.413 13:44:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:47.413 13:44:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:47.413 13:44:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 
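Each pass ends with clear_nvme, which the trace shows zero-filling the bdev in 1 MiB units (bs=1048576, count=1 here because no pass wrote more than 1 MiB) so the next pass starts from a known state. A sketch of the same reset step, reusing the bdev_nvme.json config assumed earlier; the two-argument signature below is illustrative and not the dd/common.sh helper's actual interface:

  clear_nvme() {
    local bdev=$1 size=$2
    local bs=1048576
    local count=$(( (size + bs - 1) / bs ))     # round the cleared range up to whole 1 MiB blocks
    spdk_dd --if=/dev/zero --bs=$bs --ob="$bdev" --count=$count --json bdev_nvme.json
  }
  clear_nvme Nvme0n1 49152                      # 49152 bytes dirtied -> one 1 MiB zero write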
00:05:47.413 13:44:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:47.413 13:44:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:47.413 13:44:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:47.413 [2024-12-06 13:44:46.661216] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:05:47.413 [2024-12-06 13:44:46.661294] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59922 ] 00:05:47.413 { 00:05:47.413 "subsystems": [ 00:05:47.413 { 00:05:47.413 "subsystem": "bdev", 00:05:47.413 "config": [ 00:05:47.413 { 00:05:47.413 "params": { 00:05:47.413 "trtype": "pcie", 00:05:47.413 "traddr": "0000:00:10.0", 00:05:47.413 "name": "Nvme0" 00:05:47.413 }, 00:05:47.413 "method": "bdev_nvme_attach_controller" 00:05:47.413 }, 00:05:47.413 { 00:05:47.413 "method": "bdev_wait_for_examine" 00:05:47.413 } 00:05:47.413 ] 00:05:47.413 } 00:05:47.413 ] 00:05:47.413 } 00:05:47.413 [2024-12-06 13:44:46.801171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.722 [2024-12-06 13:44:46.856914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.722 [2024-12-06 13:44:46.930648] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:47.722  [2024-12-06T13:44:47.385Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:47.981 00:05:47.981 00:05:47.981 real 0m14.682s 00:05:47.981 user 0m10.417s 00:05:47.981 sys 0m6.601s 00:05:47.981 13:44:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.981 13:44:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:47.981 ************************************ 00:05:47.981 END TEST dd_rw 00:05:47.981 ************************************ 00:05:47.981 13:44:47 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:05:47.981 13:44:47 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.981 13:44:47 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.981 13:44:47 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:47.981 ************************************ 00:05:47.981 START TEST dd_rw_offset 00:05:47.981 ************************************ 00:05:47.981 13:44:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:05:47.981 13:44:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:05:47.981 13:44:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:05:47.981 13:44:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:05:47.981 13:44:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:48.240 13:44:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:05:48.241 13:44:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=oq78yvadrnbqx7s2rh3freu97rysimzv0d44j7n3di0zjxuno80ddew25pe6mcl8lnkg6a5xvkrq42npywwyyq88rql6wyqrd787n4gagkyl53ar7w8e8qlj8jwdz2fs5mf2m5jf91o966o3elvrbisn409lyirnfljwzsrzvrsihwrdiz8ml6uv75ca9jkxqrqyqo26ewub91t686h9gsaef0va4ict3nj42i93dcx5w2lk68xw4f656ysi48a9egtkinjbk2brsrj1gjccf74zp0w40wip4yjusdab0q2bx0fpl6e9pwhhg3l4cdiz58gmcv1gwn7x0b8uottkidmlayw9dtwer7citiqd3o8mp2oexlezh0botq358m0elp8lkcf7y4h5ahote65aivsbvs8053tawfxhnxv2vu49ol0bpjmly3tmey342szb0dph8y1lkokzre2b387ymmmdxfje8v3bx685ags4vu432wg6lcmgtm3gauo2pszr5kviyq7cqqzi0n0z83zfh6434sv2z3gg2x425edb78s2nry7qixyws3xb1ilernb790f0pcan6oppmx3gjfo6hhjg16kabijnogovv1fvfek9y07wkbgcy6t76c2fl9ulbxx2owhfofsfcheiuoksl0he2x73zm4xsomqsj9xx6avxou627c5y27mjdwry5qchnbp9xnf6de00q93dwnqi9oxpgbc4hpgd8gpwdg8lbivt1zo5qw6bd3rluehxw66m1wtaq08qr2wk7tk6j73k601fqyjj2g3dudiajraf52n4oat6el51klju6n2jqb9p7dldwem8n8ntyka1lpgznbhyn9vkynwh1pk27tsmvx7mufc90kbv5p8jmahhlkejk3exr3q3tz620l3fxr0txuxftqhkhzchkzlyaa34sx6o9s5aroephk74xr9fyznbwafcsltnklhhwucucav5a65ht185ea0olm5ux8w66p8air5ngi0bf2hmy2k9t3muxqx92m9mfn8n94m1vkb2l4amcy2hkdlagrwsshdsfbujronkkrafn9z23cfwvju1rpu3omhqaebr7u1xzovy2j7huax280pjbkpzdsjkci3a5bxwfd5nhfsa81b1prir4w8fpkw3e1g0uwz8plavf7wt1yy3egijidwindvnmtv5ljeh6qnfmkm6ib1g19cp3ree41xdvmog7qbiuzwrnrz5xm0utg0eiutkp9nl80fxbm4k6x4fflgxa7x0fg2cho62oh4rmsvyslr4helllljkcs89wcuc67vcbcog9memf6jivj06t6mrpg6ttpszcymjqmfdlih0b8nbzb9de4idx8zglgljshww7xn05t9kaf7tcrfjx09y8bbmk6tgtw8wi9gwyyv538ljdz3n8coi1ev761095jtnymi7vgltr4eacllojzy988x88gms6hzu9tdbmibxgz78r7gkx0yuq8nerj4bfgtg72337lb94g9i4sspyodddk8rf44pkcgwnzkjp59k4qehglxhuv58xe4buwfy71r6q9cxzuimkm81egce5rs47o6tweryak4d8o5nsslvs0atfmgxbs8afiyubevr0q1pctepl4oaq812j51xsct1gj1wr4zax615k4i1wbnggx4kuhf3mjq61v2eowken785cwblhoco9bseimyw6fa9zsegkiben0cly3hfhfsk7p76ebusjyxetkmvkc9xrndyvmlfl0305xjwk2jkxsm7apmaj0ckp1jbl60srg0prbvvqf4x6pa8b59lmgb67utldi4hd700i2z8saih66mk9ca1bavybfxblns2304wntlpnpx4lnplkwfhvsfucmvida0nwxhwm0l0o7a55cn2donpjr9zrekrxol1wpfpn0t720satby2k9ijzxzgnj7lrw5hm5o8isb9en4et0091fbgfer3q8yhxxmt78uv4od49dyn53or34od1mynhp48fdg9m8b6z15xmxiq33s73eoypxeepci6ipep9532mh3pdglx7qv166rtvtih4o7ddz9ovwj15bgy13un5mn9tqei49g3p29v457unbvv269g79hcvujd7zylhtx3gmho6vhwqjm7t749iitkzao78u76ugxjbjysocva3v9cfcgzex6hn3ww3q4e5f0snds0r4d3lrpqvpid9lti79wyg2457794s78k6fqn1y8yubi3u9gqscadp64otl9du9ox1inxgyhhov4ttd83p5gtxoqf31j6f1wfc1d46fbfqtkrwupsac5xobsqkqus43t50dhnv7cdzt4fkb9btnjybmynv1ufm9klqzvnchtarvk822fpui4fs6te6oztroxju3sgrd0q9r282fpjb1zs6z41e2hxk7iirf4glrt1a4isfee5djanz4nxqxt6ribhru9ox29fudy1ry5fws1tzsdooiyxct2vusn918z3qkd4lamjwh890uxn8y0i9j5vo1a9cz5oqv5d4cn74x7fymp82prtc7rdvn7hbv9s0lj9adnszamu8xk0ciyx2fngzz15koilzvdfk2rg4l1dprg1fvogn9l5pmvu8tq5uovnhfc8b1doi6myo6d6y8mx2kirujwpya76uqwxhb2gxzsye91of9k5erpy2qhfwihorvwen9znchpvn78su6t9et0vu7qobtw3xo5q6b6oq5jk2ep7bxa1h2hzsg2ecvb896td3yuka0tohm3pgjeqj1gi2f8o0oxq0hp74obypsxgrld0kiuxr52qy0tk08vj5760ixewwhqwg64k3c6cxjd15x0qi6xatp7kiq0x4pfotcumivbj458pxbkeu3vzpc4bea3fq4kyixa2ewa6fltbjthu2vcm5oery8idsxabmqarpmhb9txmnpkcx8aktqyfw9c3pofxq5qfp5b0er1oxxisdws7u5ht2d3bq7vibbbl9t4gafu9lbj4x9d4x0sk3iem55q5odfupz2916vpthqfm2vn381k4gmbmir2wyzbw9xm4re955taemfxnufsig2adalt7ueexmow1v0fajhh0v0i1mpgl2j95muiem9bqdvk6uoo9oca7qvjkl3hjcjiweust67x3pqnbpsmlckv23jsaz1xsp0grnco0lzllm5d3wl5uf71lsjlcfd97w8pq6edmmlf4mc8dt86j9exqel337tjonbofhy4yw7sntu7wkog1kh2swvzpr9r9stnkfavznt59vz70h6xxep6kcxyw1fieb7kynxx6ginicgqnj8ujkzmk2tk72lm38ttkw5y4x5n4a8nhb61vzcfhjhginvb5iziohnmw4wx1fgn95v1ygnijhi2nf1ssznao53g28udr03ldssx6xpf4vkqpvul9dl2ijuqwh35sdypzgezpemi1002s89p25zgazc93270tet5ubcipgl9f9ttensiz5sqesyenxdppikf34jjw7hztodwyjesc31dbn6f
85m5zzvx8ul8ohnudtef9dnaguqx14wl13zw36e0jjho7jg35jrfmo9ooqfxselu983kadf8d4x9i8u16ro5174lfmyxs63k1n1lhypxzj3f55p77wzryj4xbj3yu7ouu8rx99v52arwklh4ovrlhqintbv94k5f9frzrtdz324a302zr012n7ab1p96cfrr6a5jp5ax8xgn2lvz1jphfyl7vedcpem7v3kvlkxw0hsqbtmcqdyfxa31nm1t393ipkeexx07yc8bicn5xfxfa386vzcsn51rffudr8htd043o72y8078mli9wpg6qkusj2qvpi5xjnc8shiqkwph9dcku4rpyjnlb2cc3xoaefcjvshzy8t75fj5g9hg4sbwejsu7sjosm0stvv9dg1p6cb4ibmog1mr1cyyktbg61ll2lpxkjefwa6wigblculi89rc64stfe5tyz1d5dh4nou4zu1opur5un9t1whi98z768m5kdmzldt73lehy5rvf0hzysfwt3x58931a0ywybzuvgm4prt8j2 00:05:48.241 13:44:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:05:48.241 13:44:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:05:48.241 13:44:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:05:48.241 13:44:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:48.241 { 00:05:48.241 "subsystems": [ 00:05:48.241 { 00:05:48.241 "subsystem": "bdev", 00:05:48.241 "config": [ 00:05:48.241 { 00:05:48.241 "params": { 00:05:48.241 "trtype": "pcie", 00:05:48.241 "traddr": "0000:00:10.0", 00:05:48.241 "name": "Nvme0" 00:05:48.241 }, 00:05:48.241 "method": "bdev_nvme_attach_controller" 00:05:48.241 }, 00:05:48.241 { 00:05:48.241 "method": "bdev_wait_for_examine" 00:05:48.241 } 00:05:48.241 ] 00:05:48.241 } 00:05:48.241 ] 00:05:48.241 } 00:05:48.241 [2024-12-06 13:44:47.448390] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:05:48.241 [2024-12-06 13:44:47.448502] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59952 ] 00:05:48.241 [2024-12-06 13:44:47.593429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.241 [2024-12-06 13:44:47.637170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.499 [2024-12-06 13:44:47.708884] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:48.499  [2024-12-06T13:44:48.161Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:05:48.757 00:05:48.757 13:44:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:05:48.757 13:44:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:05:48.757 13:44:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:05:48.757 13:44:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:48.757 { 00:05:48.757 "subsystems": [ 00:05:48.757 { 00:05:48.757 "subsystem": "bdev", 00:05:48.757 "config": [ 00:05:48.757 { 00:05:48.757 "params": { 00:05:48.757 "trtype": "pcie", 00:05:48.757 "traddr": "0000:00:10.0", 00:05:48.757 "name": "Nvme0" 00:05:48.757 }, 00:05:48.757 "method": "bdev_nvme_attach_controller" 00:05:48.757 }, 00:05:48.757 { 00:05:48.757 "method": "bdev_wait_for_examine" 00:05:48.757 } 00:05:48.757 ] 00:05:48.757 } 00:05:48.757 ] 00:05:48.757 } 00:05:48.757 [2024-12-06 13:44:48.123356] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
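The dd_rw_offset test traced here exercises the --seek/--skip options: a 4 KiB printable pattern is generated, written one unit past the start of the bdev (--seek=1), read back from the same offset (--skip=1 --count=1), and compared byte-for-byte with the original string. A sketch of that round trip, again reusing the assumed bdev_nvme.json config (the pattern generator below is illustrative, not the autotest gen_bytes helper):

  data=$(head -c 8192 /dev/urandom | base64 -w0 | tr -d '=+/' | head -c 4096)     # 4 KiB printable pattern
  printf '%s' "$data" > dd.dump0
  spdk_dd --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json bdev_nvme.json                # write at offset 1
  spdk_dd --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json bdev_nvme.json      # read the same block back
  read -rn4096 data_check < dd.dump1
  [[ $data == "$data_check" ]] && echo 'offset round trip OK'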
00:05:48.757 [2024-12-06 13:44:48.123489] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59966 ] 00:05:49.016 [2024-12-06 13:44:48.268645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.016 [2024-12-06 13:44:48.310493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.016 [2024-12-06 13:44:48.380278] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:49.275  [2024-12-06T13:44:48.939Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:05:49.535 00:05:49.535 13:44:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:05:49.536 13:44:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ oq78yvadrnbqx7s2rh3freu97rysimzv0d44j7n3di0zjxuno80ddew25pe6mcl8lnkg6a5xvkrq42npywwyyq88rql6wyqrd787n4gagkyl53ar7w8e8qlj8jwdz2fs5mf2m5jf91o966o3elvrbisn409lyirnfljwzsrzvrsihwrdiz8ml6uv75ca9jkxqrqyqo26ewub91t686h9gsaef0va4ict3nj42i93dcx5w2lk68xw4f656ysi48a9egtkinjbk2brsrj1gjccf74zp0w40wip4yjusdab0q2bx0fpl6e9pwhhg3l4cdiz58gmcv1gwn7x0b8uottkidmlayw9dtwer7citiqd3o8mp2oexlezh0botq358m0elp8lkcf7y4h5ahote65aivsbvs8053tawfxhnxv2vu49ol0bpjmly3tmey342szb0dph8y1lkokzre2b387ymmmdxfje8v3bx685ags4vu432wg6lcmgtm3gauo2pszr5kviyq7cqqzi0n0z83zfh6434sv2z3gg2x425edb78s2nry7qixyws3xb1ilernb790f0pcan6oppmx3gjfo6hhjg16kabijnogovv1fvfek9y07wkbgcy6t76c2fl9ulbxx2owhfofsfcheiuoksl0he2x73zm4xsomqsj9xx6avxou627c5y27mjdwry5qchnbp9xnf6de00q93dwnqi9oxpgbc4hpgd8gpwdg8lbivt1zo5qw6bd3rluehxw66m1wtaq08qr2wk7tk6j73k601fqyjj2g3dudiajraf52n4oat6el51klju6n2jqb9p7dldwem8n8ntyka1lpgznbhyn9vkynwh1pk27tsmvx7mufc90kbv5p8jmahhlkejk3exr3q3tz620l3fxr0txuxftqhkhzchkzlyaa34sx6o9s5aroephk74xr9fyznbwafcsltnklhhwucucav5a65ht185ea0olm5ux8w66p8air5ngi0bf2hmy2k9t3muxqx92m9mfn8n94m1vkb2l4amcy2hkdlagrwsshdsfbujronkkrafn9z23cfwvju1rpu3omhqaebr7u1xzovy2j7huax280pjbkpzdsjkci3a5bxwfd5nhfsa81b1prir4w8fpkw3e1g0uwz8plavf7wt1yy3egijidwindvnmtv5ljeh6qnfmkm6ib1g19cp3ree41xdvmog7qbiuzwrnrz5xm0utg0eiutkp9nl80fxbm4k6x4fflgxa7x0fg2cho62oh4rmsvyslr4helllljkcs89wcuc67vcbcog9memf6jivj06t6mrpg6ttpszcymjqmfdlih0b8nbzb9de4idx8zglgljshww7xn05t9kaf7tcrfjx09y8bbmk6tgtw8wi9gwyyv538ljdz3n8coi1ev761095jtnymi7vgltr4eacllojzy988x88gms6hzu9tdbmibxgz78r7gkx0yuq8nerj4bfgtg72337lb94g9i4sspyodddk8rf44pkcgwnzkjp59k4qehglxhuv58xe4buwfy71r6q9cxzuimkm81egce5rs47o6tweryak4d8o5nsslvs0atfmgxbs8afiyubevr0q1pctepl4oaq812j51xsct1gj1wr4zax615k4i1wbnggx4kuhf3mjq61v2eowken785cwblhoco9bseimyw6fa9zsegkiben0cly3hfhfsk7p76ebusjyxetkmvkc9xrndyvmlfl0305xjwk2jkxsm7apmaj0ckp1jbl60srg0prbvvqf4x6pa8b59lmgb67utldi4hd700i2z8saih66mk9ca1bavybfxblns2304wntlpnpx4lnplkwfhvsfucmvida0nwxhwm0l0o7a55cn2donpjr9zrekrxol1wpfpn0t720satby2k9ijzxzgnj7lrw5hm5o8isb9en4et0091fbgfer3q8yhxxmt78uv4od49dyn53or34od1mynhp48fdg9m8b6z15xmxiq33s73eoypxeepci6ipep9532mh3pdglx7qv166rtvtih4o7ddz9ovwj15bgy13un5mn9tqei49g3p29v457unbvv269g79hcvujd7zylhtx3gmho6vhwqjm7t749iitkzao78u76ugxjbjysocva3v9cfcgzex6hn3ww3q4e5f0snds0r4d3lrpqvpid9lti79wyg2457794s78k6fqn1y8yubi3u9gqscadp64otl9du9ox1inxgyhhov4ttd83p5gtxoqf31j6f1wfc1d46fbfqtkrwupsac5xobsqkqus43t50dhnv7cdzt4fkb9btnjybmynv1ufm9klqzvnchtarvk822fpui4fs6te6oztroxju3sgrd0q9r282fpjb1zs6z41e2hxk7iirf4glrt1a4isfee5djanz4nxqxt6ribhru9ox29fudy1ry5fws1tzsdooiyxct2vusn918z3qkd4lamjwh890uxn8y0i9j5vo1a9cz5oqv5d4cn74x7fymp82prtc7rdvn7hbv9s0lj9adnszamu8xk0ciyx2fngzz15koilzv
dfk2rg4l1dprg1fvogn9l5pmvu8tq5uovnhfc8b1doi6myo6d6y8mx2kirujwpya76uqwxhb2gxzsye91of9k5erpy2qhfwihorvwen9znchpvn78su6t9et0vu7qobtw3xo5q6b6oq5jk2ep7bxa1h2hzsg2ecvb896td3yuka0tohm3pgjeqj1gi2f8o0oxq0hp74obypsxgrld0kiuxr52qy0tk08vj5760ixewwhqwg64k3c6cxjd15x0qi6xatp7kiq0x4pfotcumivbj458pxbkeu3vzpc4bea3fq4kyixa2ewa6fltbjthu2vcm5oery8idsxabmqarpmhb9txmnpkcx8aktqyfw9c3pofxq5qfp5b0er1oxxisdws7u5ht2d3bq7vibbbl9t4gafu9lbj4x9d4x0sk3iem55q5odfupz2916vpthqfm2vn381k4gmbmir2wyzbw9xm4re955taemfxnufsig2adalt7ueexmow1v0fajhh0v0i1mpgl2j95muiem9bqdvk6uoo9oca7qvjkl3hjcjiweust67x3pqnbpsmlckv23jsaz1xsp0grnco0lzllm5d3wl5uf71lsjlcfd97w8pq6edmmlf4mc8dt86j9exqel337tjonbofhy4yw7sntu7wkog1kh2swvzpr9r9stnkfavznt59vz70h6xxep6kcxyw1fieb7kynxx6ginicgqnj8ujkzmk2tk72lm38ttkw5y4x5n4a8nhb61vzcfhjhginvb5iziohnmw4wx1fgn95v1ygnijhi2nf1ssznao53g28udr03ldssx6xpf4vkqpvul9dl2ijuqwh35sdypzgezpemi1002s89p25zgazc93270tet5ubcipgl9f9ttensiz5sqesyenxdppikf34jjw7hztodwyjesc31dbn6f85m5zzvx8ul8ohnudtef9dnaguqx14wl13zw36e0jjho7jg35jrfmo9ooqfxselu983kadf8d4x9i8u16ro5174lfmyxs63k1n1lhypxzj3f55p77wzryj4xbj3yu7ouu8rx99v52arwklh4ovrlhqintbv94k5f9frzrtdz324a302zr012n7ab1p96cfrr6a5jp5ax8xgn2lvz1jphfyl7vedcpem7v3kvlkxw0hsqbtmcqdyfxa31nm1t393ipkeexx07yc8bicn5xfxfa386vzcsn51rffudr8htd043o72y8078mli9wpg6qkusj2qvpi5xjnc8shiqkwph9dcku4rpyjnlb2cc3xoaefcjvshzy8t75fj5g9hg4sbwejsu7sjosm0stvv9dg1p6cb4ibmog1mr1cyyktbg61ll2lpxkjefwa6wigblculi89rc64stfe5tyz1d5dh4nou4zu1opur5un9t1whi98z768m5kdmzldt73lehy5rvf0hzysfwt3x58931a0ywybzuvgm4prt8j2 == \o\q\7\8\y\v\a\d\r\n\b\q\x\7\s\2\r\h\3\f\r\e\u\9\7\r\y\s\i\m\z\v\0\d\4\4\j\7\n\3\d\i\0\z\j\x\u\n\o\8\0\d\d\e\w\2\5\p\e\6\m\c\l\8\l\n\k\g\6\a\5\x\v\k\r\q\4\2\n\p\y\w\w\y\y\q\8\8\r\q\l\6\w\y\q\r\d\7\8\7\n\4\g\a\g\k\y\l\5\3\a\r\7\w\8\e\8\q\l\j\8\j\w\d\z\2\f\s\5\m\f\2\m\5\j\f\9\1\o\9\6\6\o\3\e\l\v\r\b\i\s\n\4\0\9\l\y\i\r\n\f\l\j\w\z\s\r\z\v\r\s\i\h\w\r\d\i\z\8\m\l\6\u\v\7\5\c\a\9\j\k\x\q\r\q\y\q\o\2\6\e\w\u\b\9\1\t\6\8\6\h\9\g\s\a\e\f\0\v\a\4\i\c\t\3\n\j\4\2\i\9\3\d\c\x\5\w\2\l\k\6\8\x\w\4\f\6\5\6\y\s\i\4\8\a\9\e\g\t\k\i\n\j\b\k\2\b\r\s\r\j\1\g\j\c\c\f\7\4\z\p\0\w\4\0\w\i\p\4\y\j\u\s\d\a\b\0\q\2\b\x\0\f\p\l\6\e\9\p\w\h\h\g\3\l\4\c\d\i\z\5\8\g\m\c\v\1\g\w\n\7\x\0\b\8\u\o\t\t\k\i\d\m\l\a\y\w\9\d\t\w\e\r\7\c\i\t\i\q\d\3\o\8\m\p\2\o\e\x\l\e\z\h\0\b\o\t\q\3\5\8\m\0\e\l\p\8\l\k\c\f\7\y\4\h\5\a\h\o\t\e\6\5\a\i\v\s\b\v\s\8\0\5\3\t\a\w\f\x\h\n\x\v\2\v\u\4\9\o\l\0\b\p\j\m\l\y\3\t\m\e\y\3\4\2\s\z\b\0\d\p\h\8\y\1\l\k\o\k\z\r\e\2\b\3\8\7\y\m\m\m\d\x\f\j\e\8\v\3\b\x\6\8\5\a\g\s\4\v\u\4\3\2\w\g\6\l\c\m\g\t\m\3\g\a\u\o\2\p\s\z\r\5\k\v\i\y\q\7\c\q\q\z\i\0\n\0\z\8\3\z\f\h\6\4\3\4\s\v\2\z\3\g\g\2\x\4\2\5\e\d\b\7\8\s\2\n\r\y\7\q\i\x\y\w\s\3\x\b\1\i\l\e\r\n\b\7\9\0\f\0\p\c\a\n\6\o\p\p\m\x\3\g\j\f\o\6\h\h\j\g\1\6\k\a\b\i\j\n\o\g\o\v\v\1\f\v\f\e\k\9\y\0\7\w\k\b\g\c\y\6\t\7\6\c\2\f\l\9\u\l\b\x\x\2\o\w\h\f\o\f\s\f\c\h\e\i\u\o\k\s\l\0\h\e\2\x\7\3\z\m\4\x\s\o\m\q\s\j\9\x\x\6\a\v\x\o\u\6\2\7\c\5\y\2\7\m\j\d\w\r\y\5\q\c\h\n\b\p\9\x\n\f\6\d\e\0\0\q\9\3\d\w\n\q\i\9\o\x\p\g\b\c\4\h\p\g\d\8\g\p\w\d\g\8\l\b\i\v\t\1\z\o\5\q\w\6\b\d\3\r\l\u\e\h\x\w\6\6\m\1\w\t\a\q\0\8\q\r\2\w\k\7\t\k\6\j\7\3\k\6\0\1\f\q\y\j\j\2\g\3\d\u\d\i\a\j\r\a\f\5\2\n\4\o\a\t\6\e\l\5\1\k\l\j\u\6\n\2\j\q\b\9\p\7\d\l\d\w\e\m\8\n\8\n\t\y\k\a\1\l\p\g\z\n\b\h\y\n\9\v\k\y\n\w\h\1\p\k\2\7\t\s\m\v\x\7\m\u\f\c\9\0\k\b\v\5\p\8\j\m\a\h\h\l\k\e\j\k\3\e\x\r\3\q\3\t\z\6\2\0\l\3\f\x\r\0\t\x\u\x\f\t\q\h\k\h\z\c\h\k\z\l\y\a\a\3\4\s\x\6\o\9\s\5\a\r\o\e\p\h\k\7\4\x\r\9\f\y\z\n\b\w\a\f\c\s\l\t\n\k\l\h\h\w\u\c\u\c\a\v\5\a\6\5\h\t\1\8\5\e\a\0\o\l\m\5\u\x\8\w\6\6\p\8\a\i\r\5\n\g\i\0\b\f\2\h\m\y\2\k\9\t\3\m\u\x\q\x\9\2\
m\9\m\f\n\8\n\9\4\m\1\v\k\b\2\l\4\a\m\c\y\2\h\k\d\l\a\g\r\w\s\s\h\d\s\f\b\u\j\r\o\n\k\k\r\a\f\n\9\z\2\3\c\f\w\v\j\u\1\r\p\u\3\o\m\h\q\a\e\b\r\7\u\1\x\z\o\v\y\2\j\7\h\u\a\x\2\8\0\p\j\b\k\p\z\d\s\j\k\c\i\3\a\5\b\x\w\f\d\5\n\h\f\s\a\8\1\b\1\p\r\i\r\4\w\8\f\p\k\w\3\e\1\g\0\u\w\z\8\p\l\a\v\f\7\w\t\1\y\y\3\e\g\i\j\i\d\w\i\n\d\v\n\m\t\v\5\l\j\e\h\6\q\n\f\m\k\m\6\i\b\1\g\1\9\c\p\3\r\e\e\4\1\x\d\v\m\o\g\7\q\b\i\u\z\w\r\n\r\z\5\x\m\0\u\t\g\0\e\i\u\t\k\p\9\n\l\8\0\f\x\b\m\4\k\6\x\4\f\f\l\g\x\a\7\x\0\f\g\2\c\h\o\6\2\o\h\4\r\m\s\v\y\s\l\r\4\h\e\l\l\l\l\j\k\c\s\8\9\w\c\u\c\6\7\v\c\b\c\o\g\9\m\e\m\f\6\j\i\v\j\0\6\t\6\m\r\p\g\6\t\t\p\s\z\c\y\m\j\q\m\f\d\l\i\h\0\b\8\n\b\z\b\9\d\e\4\i\d\x\8\z\g\l\g\l\j\s\h\w\w\7\x\n\0\5\t\9\k\a\f\7\t\c\r\f\j\x\0\9\y\8\b\b\m\k\6\t\g\t\w\8\w\i\9\g\w\y\y\v\5\3\8\l\j\d\z\3\n\8\c\o\i\1\e\v\7\6\1\0\9\5\j\t\n\y\m\i\7\v\g\l\t\r\4\e\a\c\l\l\o\j\z\y\9\8\8\x\8\8\g\m\s\6\h\z\u\9\t\d\b\m\i\b\x\g\z\7\8\r\7\g\k\x\0\y\u\q\8\n\e\r\j\4\b\f\g\t\g\7\2\3\3\7\l\b\9\4\g\9\i\4\s\s\p\y\o\d\d\d\k\8\r\f\4\4\p\k\c\g\w\n\z\k\j\p\5\9\k\4\q\e\h\g\l\x\h\u\v\5\8\x\e\4\b\u\w\f\y\7\1\r\6\q\9\c\x\z\u\i\m\k\m\8\1\e\g\c\e\5\r\s\4\7\o\6\t\w\e\r\y\a\k\4\d\8\o\5\n\s\s\l\v\s\0\a\t\f\m\g\x\b\s\8\a\f\i\y\u\b\e\v\r\0\q\1\p\c\t\e\p\l\4\o\a\q\8\1\2\j\5\1\x\s\c\t\1\g\j\1\w\r\4\z\a\x\6\1\5\k\4\i\1\w\b\n\g\g\x\4\k\u\h\f\3\m\j\q\6\1\v\2\e\o\w\k\e\n\7\8\5\c\w\b\l\h\o\c\o\9\b\s\e\i\m\y\w\6\f\a\9\z\s\e\g\k\i\b\e\n\0\c\l\y\3\h\f\h\f\s\k\7\p\7\6\e\b\u\s\j\y\x\e\t\k\m\v\k\c\9\x\r\n\d\y\v\m\l\f\l\0\3\0\5\x\j\w\k\2\j\k\x\s\m\7\a\p\m\a\j\0\c\k\p\1\j\b\l\6\0\s\r\g\0\p\r\b\v\v\q\f\4\x\6\p\a\8\b\5\9\l\m\g\b\6\7\u\t\l\d\i\4\h\d\7\0\0\i\2\z\8\s\a\i\h\6\6\m\k\9\c\a\1\b\a\v\y\b\f\x\b\l\n\s\2\3\0\4\w\n\t\l\p\n\p\x\4\l\n\p\l\k\w\f\h\v\s\f\u\c\m\v\i\d\a\0\n\w\x\h\w\m\0\l\0\o\7\a\5\5\c\n\2\d\o\n\p\j\r\9\z\r\e\k\r\x\o\l\1\w\p\f\p\n\0\t\7\2\0\s\a\t\b\y\2\k\9\i\j\z\x\z\g\n\j\7\l\r\w\5\h\m\5\o\8\i\s\b\9\e\n\4\e\t\0\0\9\1\f\b\g\f\e\r\3\q\8\y\h\x\x\m\t\7\8\u\v\4\o\d\4\9\d\y\n\5\3\o\r\3\4\o\d\1\m\y\n\h\p\4\8\f\d\g\9\m\8\b\6\z\1\5\x\m\x\i\q\3\3\s\7\3\e\o\y\p\x\e\e\p\c\i\6\i\p\e\p\9\5\3\2\m\h\3\p\d\g\l\x\7\q\v\1\6\6\r\t\v\t\i\h\4\o\7\d\d\z\9\o\v\w\j\1\5\b\g\y\1\3\u\n\5\m\n\9\t\q\e\i\4\9\g\3\p\2\9\v\4\5\7\u\n\b\v\v\2\6\9\g\7\9\h\c\v\u\j\d\7\z\y\l\h\t\x\3\g\m\h\o\6\v\h\w\q\j\m\7\t\7\4\9\i\i\t\k\z\a\o\7\8\u\7\6\u\g\x\j\b\j\y\s\o\c\v\a\3\v\9\c\f\c\g\z\e\x\6\h\n\3\w\w\3\q\4\e\5\f\0\s\n\d\s\0\r\4\d\3\l\r\p\q\v\p\i\d\9\l\t\i\7\9\w\y\g\2\4\5\7\7\9\4\s\7\8\k\6\f\q\n\1\y\8\y\u\b\i\3\u\9\g\q\s\c\a\d\p\6\4\o\t\l\9\d\u\9\o\x\1\i\n\x\g\y\h\h\o\v\4\t\t\d\8\3\p\5\g\t\x\o\q\f\3\1\j\6\f\1\w\f\c\1\d\4\6\f\b\f\q\t\k\r\w\u\p\s\a\c\5\x\o\b\s\q\k\q\u\s\4\3\t\5\0\d\h\n\v\7\c\d\z\t\4\f\k\b\9\b\t\n\j\y\b\m\y\n\v\1\u\f\m\9\k\l\q\z\v\n\c\h\t\a\r\v\k\8\2\2\f\p\u\i\4\f\s\6\t\e\6\o\z\t\r\o\x\j\u\3\s\g\r\d\0\q\9\r\2\8\2\f\p\j\b\1\z\s\6\z\4\1\e\2\h\x\k\7\i\i\r\f\4\g\l\r\t\1\a\4\i\s\f\e\e\5\d\j\a\n\z\4\n\x\q\x\t\6\r\i\b\h\r\u\9\o\x\2\9\f\u\d\y\1\r\y\5\f\w\s\1\t\z\s\d\o\o\i\y\x\c\t\2\v\u\s\n\9\1\8\z\3\q\k\d\4\l\a\m\j\w\h\8\9\0\u\x\n\8\y\0\i\9\j\5\v\o\1\a\9\c\z\5\o\q\v\5\d\4\c\n\7\4\x\7\f\y\m\p\8\2\p\r\t\c\7\r\d\v\n\7\h\b\v\9\s\0\l\j\9\a\d\n\s\z\a\m\u\8\x\k\0\c\i\y\x\2\f\n\g\z\z\1\5\k\o\i\l\z\v\d\f\k\2\r\g\4\l\1\d\p\r\g\1\f\v\o\g\n\9\l\5\p\m\v\u\8\t\q\5\u\o\v\n\h\f\c\8\b\1\d\o\i\6\m\y\o\6\d\6\y\8\m\x\2\k\i\r\u\j\w\p\y\a\7\6\u\q\w\x\h\b\2\g\x\z\s\y\e\9\1\o\f\9\k\5\e\r\p\y\2\q\h\f\w\i\h\o\r\v\w\e\n\9\z\n\c\h\p\v\n\7\8\s\u\6\t\9\e\t\0\v\u\7\q\o\b\t\w\3\x\o\5\q\6\b\6\o\q\5\j\k\2\e\p\7\b\x\a\1\h\2\h\z\s\g\2\e\c\v\b\8\9\6\t\d\3\y\u\k\a\0\t\o\h\m\3\p\g\j\e\q\j\1\g\i\2\f\8\o\0\o\x\q\0\h\p\7\4\o\b
\y\p\s\x\g\r\l\d\0\k\i\u\x\r\5\2\q\y\0\t\k\0\8\v\j\5\7\6\0\i\x\e\w\w\h\q\w\g\6\4\k\3\c\6\c\x\j\d\1\5\x\0\q\i\6\x\a\t\p\7\k\i\q\0\x\4\p\f\o\t\c\u\m\i\v\b\j\4\5\8\p\x\b\k\e\u\3\v\z\p\c\4\b\e\a\3\f\q\4\k\y\i\x\a\2\e\w\a\6\f\l\t\b\j\t\h\u\2\v\c\m\5\o\e\r\y\8\i\d\s\x\a\b\m\q\a\r\p\m\h\b\9\t\x\m\n\p\k\c\x\8\a\k\t\q\y\f\w\9\c\3\p\o\f\x\q\5\q\f\p\5\b\0\e\r\1\o\x\x\i\s\d\w\s\7\u\5\h\t\2\d\3\b\q\7\v\i\b\b\b\l\9\t\4\g\a\f\u\9\l\b\j\4\x\9\d\4\x\0\s\k\3\i\e\m\5\5\q\5\o\d\f\u\p\z\2\9\1\6\v\p\t\h\q\f\m\2\v\n\3\8\1\k\4\g\m\b\m\i\r\2\w\y\z\b\w\9\x\m\4\r\e\9\5\5\t\a\e\m\f\x\n\u\f\s\i\g\2\a\d\a\l\t\7\u\e\e\x\m\o\w\1\v\0\f\a\j\h\h\0\v\0\i\1\m\p\g\l\2\j\9\5\m\u\i\e\m\9\b\q\d\v\k\6\u\o\o\9\o\c\a\7\q\v\j\k\l\3\h\j\c\j\i\w\e\u\s\t\6\7\x\3\p\q\n\b\p\s\m\l\c\k\v\2\3\j\s\a\z\1\x\s\p\0\g\r\n\c\o\0\l\z\l\l\m\5\d\3\w\l\5\u\f\7\1\l\s\j\l\c\f\d\9\7\w\8\p\q\6\e\d\m\m\l\f\4\m\c\8\d\t\8\6\j\9\e\x\q\e\l\3\3\7\t\j\o\n\b\o\f\h\y\4\y\w\7\s\n\t\u\7\w\k\o\g\1\k\h\2\s\w\v\z\p\r\9\r\9\s\t\n\k\f\a\v\z\n\t\5\9\v\z\7\0\h\6\x\x\e\p\6\k\c\x\y\w\1\f\i\e\b\7\k\y\n\x\x\6\g\i\n\i\c\g\q\n\j\8\u\j\k\z\m\k\2\t\k\7\2\l\m\3\8\t\t\k\w\5\y\4\x\5\n\4\a\8\n\h\b\6\1\v\z\c\f\h\j\h\g\i\n\v\b\5\i\z\i\o\h\n\m\w\4\w\x\1\f\g\n\9\5\v\1\y\g\n\i\j\h\i\2\n\f\1\s\s\z\n\a\o\5\3\g\2\8\u\d\r\0\3\l\d\s\s\x\6\x\p\f\4\v\k\q\p\v\u\l\9\d\l\2\i\j\u\q\w\h\3\5\s\d\y\p\z\g\e\z\p\e\m\i\1\0\0\2\s\8\9\p\2\5\z\g\a\z\c\9\3\2\7\0\t\e\t\5\u\b\c\i\p\g\l\9\f\9\t\t\e\n\s\i\z\5\s\q\e\s\y\e\n\x\d\p\p\i\k\f\3\4\j\j\w\7\h\z\t\o\d\w\y\j\e\s\c\3\1\d\b\n\6\f\8\5\m\5\z\z\v\x\8\u\l\8\o\h\n\u\d\t\e\f\9\d\n\a\g\u\q\x\1\4\w\l\1\3\z\w\3\6\e\0\j\j\h\o\7\j\g\3\5\j\r\f\m\o\9\o\o\q\f\x\s\e\l\u\9\8\3\k\a\d\f\8\d\4\x\9\i\8\u\1\6\r\o\5\1\7\4\l\f\m\y\x\s\6\3\k\1\n\1\l\h\y\p\x\z\j\3\f\5\5\p\7\7\w\z\r\y\j\4\x\b\j\3\y\u\7\o\u\u\8\r\x\9\9\v\5\2\a\r\w\k\l\h\4\o\v\r\l\h\q\i\n\t\b\v\9\4\k\5\f\9\f\r\z\r\t\d\z\3\2\4\a\3\0\2\z\r\0\1\2\n\7\a\b\1\p\9\6\c\f\r\r\6\a\5\j\p\5\a\x\8\x\g\n\2\l\v\z\1\j\p\h\f\y\l\7\v\e\d\c\p\e\m\7\v\3\k\v\l\k\x\w\0\h\s\q\b\t\m\c\q\d\y\f\x\a\3\1\n\m\1\t\3\9\3\i\p\k\e\e\x\x\0\7\y\c\8\b\i\c\n\5\x\f\x\f\a\3\8\6\v\z\c\s\n\5\1\r\f\f\u\d\r\8\h\t\d\0\4\3\o\7\2\y\8\0\7\8\m\l\i\9\w\p\g\6\q\k\u\s\j\2\q\v\p\i\5\x\j\n\c\8\s\h\i\q\k\w\p\h\9\d\c\k\u\4\r\p\y\j\n\l\b\2\c\c\3\x\o\a\e\f\c\j\v\s\h\z\y\8\t\7\5\f\j\5\g\9\h\g\4\s\b\w\e\j\s\u\7\s\j\o\s\m\0\s\t\v\v\9\d\g\1\p\6\c\b\4\i\b\m\o\g\1\m\r\1\c\y\y\k\t\b\g\6\1\l\l\2\l\p\x\k\j\e\f\w\a\6\w\i\g\b\l\c\u\l\i\8\9\r\c\6\4\s\t\f\e\5\t\y\z\1\d\5\d\h\4\n\o\u\4\z\u\1\o\p\u\r\5\u\n\9\t\1\w\h\i\9\8\z\7\6\8\m\5\k\d\m\z\l\d\t\7\3\l\e\h\y\5\r\v\f\0\h\z\y\s\f\w\t\3\x\5\8\9\3\1\a\0\y\w\y\b\z\u\v\g\m\4\p\r\t\8\j\2 ]] 00:05:49.536 ************************************ 00:05:49.536 END TEST dd_rw_offset 00:05:49.536 ************************************ 00:05:49.536 00:05:49.536 real 0m1.387s 00:05:49.536 user 0m0.916s 00:05:49.536 sys 0m0.735s 00:05:49.536 13:44:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.536 13:44:48 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:49.536 13:44:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:05:49.536 13:44:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:05:49.536 13:44:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:49.536 13:44:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:49.536 13:44:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:05:49.536 13:44:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 
00:05:49.536 13:44:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:05:49.536 13:44:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:49.536 13:44:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:05:49.536 13:44:48 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:49.536 13:44:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:49.536 [2024-12-06 13:44:48.826600] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:05:49.536 [2024-12-06 13:44:48.827610] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60001 ] 00:05:49.536 { 00:05:49.536 "subsystems": [ 00:05:49.536 { 00:05:49.536 "subsystem": "bdev", 00:05:49.536 "config": [ 00:05:49.536 { 00:05:49.536 "params": { 00:05:49.536 "trtype": "pcie", 00:05:49.536 "traddr": "0000:00:10.0", 00:05:49.536 "name": "Nvme0" 00:05:49.536 }, 00:05:49.536 "method": "bdev_nvme_attach_controller" 00:05:49.536 }, 00:05:49.536 { 00:05:49.536 "method": "bdev_wait_for_examine" 00:05:49.536 } 00:05:49.536 ] 00:05:49.536 } 00:05:49.536 ] 00:05:49.536 } 00:05:49.795 [2024-12-06 13:44:48.972892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.795 [2024-12-06 13:44:49.018989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.795 [2024-12-06 13:44:49.087251] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:50.054  [2024-12-06T13:44:49.458Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:05:50.054 00:05:50.054 13:44:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:50.054 ************************************ 00:05:50.054 END TEST spdk_dd_basic_rw 00:05:50.054 ************************************ 00:05:50.054 00:05:50.054 real 0m18.032s 00:05:50.054 user 0m12.498s 00:05:50.054 sys 0m8.093s 00:05:50.054 13:44:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.054 13:44:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:50.313 13:44:49 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:05:50.313 13:44:49 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.313 13:44:49 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.313 13:44:49 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:05:50.313 ************************************ 00:05:50.313 START TEST spdk_dd_posix 00:05:50.313 ************************************ 00:05:50.313 13:44:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:05:50.313 * Looking for test storage... 
00:05:50.313 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:50.313 13:44:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:50.313 13:44:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # lcov --version 00:05:50.313 13:44:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:50.313 13:44:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:50.313 13:44:49 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.313 13:44:49 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.313 13:44:49 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.313 13:44:49 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.313 13:44:49 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.313 13:44:49 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.313 13:44:49 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.313 13:44:49 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.313 13:44:49 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.313 13:44:49 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.313 13:44:49 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.313 13:44:49 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:05:50.313 13:44:49 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:05:50.313 13:44:49 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.313 13:44:49 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:50.313 13:44:49 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:05:50.313 13:44:49 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:05:50.313 13:44:49 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.313 13:44:49 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:05:50.313 13:44:49 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.313 13:44:49 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:05:50.313 13:44:49 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:05:50.313 13:44:49 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.313 13:44:49 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:05:50.313 13:44:49 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.313 13:44:49 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.313 13:44:49 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.313 13:44:49 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:05:50.313 13:44:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.313 13:44:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:50.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.313 --rc genhtml_branch_coverage=1 00:05:50.313 --rc genhtml_function_coverage=1 00:05:50.313 --rc genhtml_legend=1 00:05:50.313 --rc geninfo_all_blocks=1 00:05:50.313 --rc geninfo_unexecuted_blocks=1 00:05:50.313 00:05:50.313 ' 00:05:50.313 13:44:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:50.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.313 --rc genhtml_branch_coverage=1 00:05:50.313 --rc genhtml_function_coverage=1 00:05:50.313 --rc genhtml_legend=1 00:05:50.313 --rc geninfo_all_blocks=1 00:05:50.313 --rc geninfo_unexecuted_blocks=1 00:05:50.313 00:05:50.313 ' 00:05:50.313 13:44:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:50.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.313 --rc genhtml_branch_coverage=1 00:05:50.313 --rc genhtml_function_coverage=1 00:05:50.313 --rc genhtml_legend=1 00:05:50.313 --rc geninfo_all_blocks=1 00:05:50.313 --rc geninfo_unexecuted_blocks=1 00:05:50.313 00:05:50.313 ' 00:05:50.313 13:44:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:50.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.314 --rc genhtml_branch_coverage=1 00:05:50.314 --rc genhtml_function_coverage=1 00:05:50.314 --rc genhtml_legend=1 00:05:50.314 --rc geninfo_all_blocks=1 00:05:50.314 --rc geninfo_unexecuted_blocks=1 00:05:50.314 00:05:50.314 ' 00:05:50.314 13:44:49 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:50.314 13:44:49 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:05:50.314 13:44:49 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:50.314 13:44:49 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:50.314 13:44:49 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:50.314 13:44:49 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.314 13:44:49 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.314 13:44:49 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.314 13:44:49 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:05:50.314 13:44:49 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.314 13:44:49 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:05:50.314 13:44:49 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:05:50.314 13:44:49 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:05:50.314 13:44:49 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:05:50.314 13:44:49 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:50.314 13:44:49 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:50.314 13:44:49 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:05:50.314 13:44:49 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:05:50.314 * First test run, liburing in use 00:05:50.314 13:44:49 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:05:50.314 13:44:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.314 13:44:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:05:50.314 13:44:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:50.314 ************************************ 00:05:50.314 START TEST dd_flag_append 00:05:50.314 ************************************ 00:05:50.314 13:44:49 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:05:50.314 13:44:49 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:05:50.314 13:44:49 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:05:50.314 13:44:49 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:05:50.314 13:44:49 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:05:50.314 13:44:49 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:05:50.314 13:44:49 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=7lncymggd58ulcw59hcoua05defv4zlg 00:05:50.314 13:44:49 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:05:50.314 13:44:49 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:05:50.314 13:44:49 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:05:50.572 13:44:49 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=gck534etn51k6s72bx46ao6cpt51vxm6 00:05:50.572 13:44:49 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s 7lncymggd58ulcw59hcoua05defv4zlg 00:05:50.572 13:44:49 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s gck534etn51k6s72bx46ao6cpt51vxm6 00:05:50.572 13:44:49 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:05:50.572 [2024-12-06 13:44:49.772626] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:05:50.572 [2024-12-06 13:44:49.772767] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60073 ] 00:05:50.572 [2024-12-06 13:44:49.916931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.572 [2024-12-06 13:44:49.965011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.829 [2024-12-06 13:44:50.034818] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:50.829  [2024-12-06T13:44:50.492Z] Copying: 32/32 [B] (average 31 kBps) 00:05:51.088 00:05:51.088 13:44:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ gck534etn51k6s72bx46ao6cpt51vxm67lncymggd58ulcw59hcoua05defv4zlg == \g\c\k\5\3\4\e\t\n\5\1\k\6\s\7\2\b\x\4\6\a\o\6\c\p\t\5\1\v\x\m\6\7\l\n\c\y\m\g\g\d\5\8\u\l\c\w\5\9\h\c\o\u\a\0\5\d\e\f\v\4\z\l\g ]] 00:05:51.088 00:05:51.088 real 0m0.600s 00:05:51.088 user 0m0.322s 00:05:51.088 sys 0m0.343s 00:05:51.088 ************************************ 00:05:51.088 END TEST dd_flag_append 00:05:51.088 ************************************ 00:05:51.088 13:44:50 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.088 13:44:50 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:05:51.088 13:44:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:05:51.088 13:44:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.088 13:44:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.088 13:44:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:51.088 ************************************ 00:05:51.088 START TEST dd_flag_directory 00:05:51.088 ************************************ 00:05:51.088 13:44:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:05:51.088 13:44:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:51.088 13:44:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:05:51.088 13:44:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:51.088 13:44:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:51.088 13:44:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:51.088 13:44:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:51.088 13:44:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:51.088 13:44:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:51.088 13:44:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:51.088 13:44:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:51.088 13:44:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:51.088 13:44:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:51.088 [2024-12-06 13:44:50.424227] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:05:51.088 [2024-12-06 13:44:50.424340] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60096 ] 00:05:51.346 [2024-12-06 13:44:50.569681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.347 [2024-12-06 13:44:50.643212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.347 [2024-12-06 13:44:50.710852] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:51.605 [2024-12-06 13:44:50.757186] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:51.605 [2024-12-06 13:44:50.757241] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:51.606 [2024-12-06 13:44:50.757253] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:51.606 [2024-12-06 13:44:50.916087] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:05:51.606 13:44:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:05:51.606 13:44:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:51.606 13:44:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:05:51.606 13:44:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:05:51.606 13:44:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:05:51.606 13:44:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:51.606 13:44:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:51.606 13:44:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:05:51.606 13:44:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:51.606 13:44:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:51.606 13:44:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:51.606 13:44:50 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:51.606 13:44:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:51.606 13:44:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:51.606 13:44:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:51.606 13:44:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:51.606 13:44:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:51.606 13:44:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:51.865 [2024-12-06 13:44:51.039039] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:05:51.865 [2024-12-06 13:44:51.039174] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60111 ] 00:05:51.865 [2024-12-06 13:44:51.184374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.865 [2024-12-06 13:44:51.229693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.124 [2024-12-06 13:44:51.298143] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:52.124 [2024-12-06 13:44:51.343435] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:52.124 [2024-12-06 13:44:51.343491] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:52.124 [2024-12-06 13:44:51.343506] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:52.124 [2024-12-06 13:44:51.500309] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:05:52.383 13:44:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:05:52.383 13:44:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:52.383 13:44:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:05:52.383 13:44:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:05:52.383 13:44:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:05:52.383 13:44:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:52.383 00:05:52.383 real 0m1.206s 00:05:52.383 user 0m0.661s 00:05:52.383 sys 0m0.333s 00:05:52.383 ************************************ 00:05:52.383 END TEST dd_flag_directory 00:05:52.383 ************************************ 00:05:52.383 13:44:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.383 13:44:51 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:05:52.383 13:44:51 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:05:52.383 13:44:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.383 13:44:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.383 13:44:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:52.383 ************************************ 00:05:52.383 START TEST dd_flag_nofollow 00:05:52.383 ************************************ 00:05:52.383 13:44:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:05:52.383 13:44:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:05:52.383 13:44:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:05:52.383 13:44:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:05:52.383 13:44:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:05:52.383 13:44:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:52.383 13:44:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:05:52.383 13:44:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:52.383 13:44:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:52.383 13:44:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.383 13:44:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:52.383 13:44:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.383 13:44:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:52.383 13:44:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.383 13:44:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:52.383 13:44:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:52.383 13:44:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:52.383 [2024-12-06 13:44:51.683778] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:05:52.383 [2024-12-06 13:44:51.683866] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60145 ] 00:05:52.642 [2024-12-06 13:44:51.828617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.642 [2024-12-06 13:44:51.884649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.642 [2024-12-06 13:44:51.953349] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:52.642 [2024-12-06 13:44:51.998481] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:05:52.642 [2024-12-06 13:44:51.998538] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:05:52.642 [2024-12-06 13:44:51.998552] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:52.902 [2024-12-06 13:44:52.158978] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:05:52.902 13:44:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:05:52.902 13:44:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:52.902 13:44:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:05:52.902 13:44:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:05:52.902 13:44:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:05:52.902 13:44:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:52.902 13:44:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:52.902 13:44:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:05:52.902 13:44:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:52.902 13:44:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:52.902 13:44:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.902 13:44:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:52.902 13:44:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.902 13:44:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:52.902 13:44:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.902 13:44:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:52.902 13:44:52 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:52.902 13:44:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:52.902 [2024-12-06 13:44:52.284902] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:05:52.902 [2024-12-06 13:44:52.284994] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60151 ] 00:05:53.161 [2024-12-06 13:44:52.428305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.161 [2024-12-06 13:44:52.486312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.161 [2024-12-06 13:44:52.558748] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:53.420 [2024-12-06 13:44:52.606479] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:05:53.420 [2024-12-06 13:44:52.606519] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:05:53.420 [2024-12-06 13:44:52.606532] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:53.420 [2024-12-06 13:44:52.763789] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:05:53.679 13:44:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:05:53.680 13:44:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:53.680 13:44:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:05:53.680 13:44:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:05:53.680 13:44:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:05:53.680 13:44:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:53.680 13:44:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:05:53.680 13:44:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:05:53.680 13:44:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:05:53.680 13:44:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:53.680 [2024-12-06 13:44:52.884361] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:05:53.680 [2024-12-06 13:44:52.884458] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60164 ] 00:05:53.680 [2024-12-06 13:44:53.028684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.680 [2024-12-06 13:44:53.071022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.939 [2024-12-06 13:44:53.139438] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:53.939  [2024-12-06T13:44:53.602Z] Copying: 512/512 [B] (average 500 kBps) 00:05:54.198 00:05:54.198 13:44:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ bljzmanxnk7cvqrdrb4lw5ik7ri7u2lw60yzj9xhm6t7mrvt93l14zclyt4ks4mdrpdm6q6ro3c3aaupercamp3mf66pr20ipv3ie8693zgubx2di1hk0lnbp51ry9g9vxanm1peyi1jj7qerg1o6us2xpog0yh2tnklwz1ikuyn9pslwywvsgmu13nvrmljmru8h8jjatfdrjcfpsjog6qphil53w1mgr7zbcoj2c0k1truzp8nsqg82xsrpzpea44ipse04e2kn9vscxrwa7p6da0114oj4mfccw81yojx3tjcdgmak657p6r32ssvbszqg4xxrj4yeht3xpghgyqn4z285b2yttoftahd452986bhasrnt8nsg1lf8utd01nirz5xpg07anmf48s7owfgo64k808q1fdi7c3tfr4giwlwcrqzhjydbgahsubeq4osg15zjxabnpsckxxm7enehsogaaah02gng85nt777bqwbgh1jbw4oa31sveee == \b\l\j\z\m\a\n\x\n\k\7\c\v\q\r\d\r\b\4\l\w\5\i\k\7\r\i\7\u\2\l\w\6\0\y\z\j\9\x\h\m\6\t\7\m\r\v\t\9\3\l\1\4\z\c\l\y\t\4\k\s\4\m\d\r\p\d\m\6\q\6\r\o\3\c\3\a\a\u\p\e\r\c\a\m\p\3\m\f\6\6\p\r\2\0\i\p\v\3\i\e\8\6\9\3\z\g\u\b\x\2\d\i\1\h\k\0\l\n\b\p\5\1\r\y\9\g\9\v\x\a\n\m\1\p\e\y\i\1\j\j\7\q\e\r\g\1\o\6\u\s\2\x\p\o\g\0\y\h\2\t\n\k\l\w\z\1\i\k\u\y\n\9\p\s\l\w\y\w\v\s\g\m\u\1\3\n\v\r\m\l\j\m\r\u\8\h\8\j\j\a\t\f\d\r\j\c\f\p\s\j\o\g\6\q\p\h\i\l\5\3\w\1\m\g\r\7\z\b\c\o\j\2\c\0\k\1\t\r\u\z\p\8\n\s\q\g\8\2\x\s\r\p\z\p\e\a\4\4\i\p\s\e\0\4\e\2\k\n\9\v\s\c\x\r\w\a\7\p\6\d\a\0\1\1\4\o\j\4\m\f\c\c\w\8\1\y\o\j\x\3\t\j\c\d\g\m\a\k\6\5\7\p\6\r\3\2\s\s\v\b\s\z\q\g\4\x\x\r\j\4\y\e\h\t\3\x\p\g\h\g\y\q\n\4\z\2\8\5\b\2\y\t\t\o\f\t\a\h\d\4\5\2\9\8\6\b\h\a\s\r\n\t\8\n\s\g\1\l\f\8\u\t\d\0\1\n\i\r\z\5\x\p\g\0\7\a\n\m\f\4\8\s\7\o\w\f\g\o\6\4\k\8\0\8\q\1\f\d\i\7\c\3\t\f\r\4\g\i\w\l\w\c\r\q\z\h\j\y\d\b\g\a\h\s\u\b\e\q\4\o\s\g\1\5\z\j\x\a\b\n\p\s\c\k\x\x\m\7\e\n\e\h\s\o\g\a\a\a\h\0\2\g\n\g\8\5\n\t\7\7\7\b\q\w\b\g\h\1\j\b\w\4\o\a\3\1\s\v\e\e\e ]] 00:05:54.198 00:05:54.198 real 0m1.793s 00:05:54.198 user 0m0.973s 00:05:54.198 sys 0m0.673s 00:05:54.198 ************************************ 00:05:54.198 END TEST dd_flag_nofollow 00:05:54.198 ************************************ 00:05:54.198 13:44:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.198 13:44:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:05:54.198 13:44:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:05:54.198 13:44:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.198 13:44:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.198 13:44:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:54.198 ************************************ 00:05:54.198 START TEST dd_flag_noatime 00:05:54.198 ************************************ 00:05:54.198 13:44:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:05:54.198 13:44:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:05:54.198 13:44:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:05:54.198 13:44:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:05:54.198 13:44:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:05:54.198 13:44:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:05:54.198 13:44:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:54.198 13:44:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1733492693 00:05:54.198 13:44:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:54.198 13:44:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1733492693 00:05:54.198 13:44:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:05:55.159 13:44:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:55.159 [2024-12-06 13:44:54.553677] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:05:55.159 [2024-12-06 13:44:54.553824] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60201 ] 00:05:55.418 [2024-12-06 13:44:54.705864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.418 [2024-12-06 13:44:54.773457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.677 [2024-12-06 13:44:54.852666] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:55.677  [2024-12-06T13:44:55.339Z] Copying: 512/512 [B] (average 500 kBps) 00:05:55.935 00:05:55.935 13:44:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:55.935 13:44:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1733492693 )) 00:05:55.935 13:44:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:55.935 13:44:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1733492693 )) 00:05:55.935 13:44:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:55.935 [2024-12-06 13:44:55.198801] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:05:55.935 [2024-12-06 13:44:55.198907] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60221 ] 00:05:56.193 [2024-12-06 13:44:55.342598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.193 [2024-12-06 13:44:55.392435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.193 [2024-12-06 13:44:55.463529] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:56.193  [2024-12-06T13:44:55.856Z] Copying: 512/512 [B] (average 500 kBps) 00:05:56.452 00:05:56.452 13:44:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:56.452 13:44:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1733492695 )) 00:05:56.452 00:05:56.452 real 0m2.271s 00:05:56.452 user 0m0.680s 00:05:56.452 sys 0m0.726s 00:05:56.452 13:44:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.452 13:44:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:05:56.452 ************************************ 00:05:56.452 END TEST dd_flag_noatime 00:05:56.452 ************************************ 00:05:56.452 13:44:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:05:56.452 13:44:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.452 13:44:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.452 13:44:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:56.452 ************************************ 00:05:56.452 START TEST dd_flags_misc 00:05:56.452 ************************************ 00:05:56.452 13:44:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:05:56.452 13:44:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:05:56.452 13:44:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:05:56.452 13:44:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:05:56.452 13:44:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:05:56.452 13:44:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:05:56.452 13:44:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:05:56.452 13:44:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:05:56.452 13:44:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:56.452 13:44:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:05:56.452 [2024-12-06 13:44:55.848409] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:05:56.452 [2024-12-06 13:44:55.848506] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60255 ] 00:05:56.711 [2024-12-06 13:44:55.992615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.711 [2024-12-06 13:44:56.038320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.711 [2024-12-06 13:44:56.106821] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:56.971  [2024-12-06T13:44:56.634Z] Copying: 512/512 [B] (average 500 kBps) 00:05:57.231 00:05:57.231 13:44:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ gij0i6nh4mkbohrvt7ipzfg7aa0wpj5b8eqaem9ya302xu66yki6mvmj3vk5825mrgywfkmlv00e666j7xchfmtj90rxcvsid5dgmnare47tes4ppbjrlsdhenrnrpmebphtg3e71mw2lfq5g0c0fd9qeh0cnakjeconbwi8zs86jjux5cp1ryrjyvves9fxw1ce1qd4rsdydj4r75vcp6l8cxjoaqmh9pnowwu7g9cl6dw49h5xrj8ypsm52jk61mqu8hzxgfxfd6p2t3wldh62praod4q9k4j8y1ajj05keby9nl2jjnc6t3hr3i8l8vsu671w588m1nuh81fkddm1y65y7cktlschz9dzzaiubjzv741u1ndtl3cdxx1qaakrelwrk2tvmg0tvjbddkr7risgsqs4y19150yg74zxvqcy39p9m2gqb2arc9o2hpv6v2vb3haysq1r0qev63ormfdrb56ipdnjbjk3wd3yjt8l2jkp3s8yh6bux3b2 == \g\i\j\0\i\6\n\h\4\m\k\b\o\h\r\v\t\7\i\p\z\f\g\7\a\a\0\w\p\j\5\b\8\e\q\a\e\m\9\y\a\3\0\2\x\u\6\6\y\k\i\6\m\v\m\j\3\v\k\5\8\2\5\m\r\g\y\w\f\k\m\l\v\0\0\e\6\6\6\j\7\x\c\h\f\m\t\j\9\0\r\x\c\v\s\i\d\5\d\g\m\n\a\r\e\4\7\t\e\s\4\p\p\b\j\r\l\s\d\h\e\n\r\n\r\p\m\e\b\p\h\t\g\3\e\7\1\m\w\2\l\f\q\5\g\0\c\0\f\d\9\q\e\h\0\c\n\a\k\j\e\c\o\n\b\w\i\8\z\s\8\6\j\j\u\x\5\c\p\1\r\y\r\j\y\v\v\e\s\9\f\x\w\1\c\e\1\q\d\4\r\s\d\y\d\j\4\r\7\5\v\c\p\6\l\8\c\x\j\o\a\q\m\h\9\p\n\o\w\w\u\7\g\9\c\l\6\d\w\4\9\h\5\x\r\j\8\y\p\s\m\5\2\j\k\6\1\m\q\u\8\h\z\x\g\f\x\f\d\6\p\2\t\3\w\l\d\h\6\2\p\r\a\o\d\4\q\9\k\4\j\8\y\1\a\j\j\0\5\k\e\b\y\9\n\l\2\j\j\n\c\6\t\3\h\r\3\i\8\l\8\v\s\u\6\7\1\w\5\8\8\m\1\n\u\h\8\1\f\k\d\d\m\1\y\6\5\y\7\c\k\t\l\s\c\h\z\9\d\z\z\a\i\u\b\j\z\v\7\4\1\u\1\n\d\t\l\3\c\d\x\x\1\q\a\a\k\r\e\l\w\r\k\2\t\v\m\g\0\t\v\j\b\d\d\k\r\7\r\i\s\g\s\q\s\4\y\1\9\1\5\0\y\g\7\4\z\x\v\q\c\y\3\9\p\9\m\2\g\q\b\2\a\r\c\9\o\2\h\p\v\6\v\2\v\b\3\h\a\y\s\q\1\r\0\q\e\v\6\3\o\r\m\f\d\r\b\5\6\i\p\d\n\j\b\j\k\3\w\d\3\y\j\t\8\l\2\j\k\p\3\s\8\y\h\6\b\u\x\3\b\2 ]] 00:05:57.231 13:44:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:57.231 13:44:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:05:57.231 [2024-12-06 13:44:56.420754] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:05:57.231 [2024-12-06 13:44:56.420847] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60259 ] 00:05:57.231 [2024-12-06 13:44:56.560696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.231 [2024-12-06 13:44:56.606688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.490 [2024-12-06 13:44:56.678231] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:57.490  [2024-12-06T13:44:57.153Z] Copying: 512/512 [B] (average 500 kBps) 00:05:57.749 00:05:57.749 13:44:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ gij0i6nh4mkbohrvt7ipzfg7aa0wpj5b8eqaem9ya302xu66yki6mvmj3vk5825mrgywfkmlv00e666j7xchfmtj90rxcvsid5dgmnare47tes4ppbjrlsdhenrnrpmebphtg3e71mw2lfq5g0c0fd9qeh0cnakjeconbwi8zs86jjux5cp1ryrjyvves9fxw1ce1qd4rsdydj4r75vcp6l8cxjoaqmh9pnowwu7g9cl6dw49h5xrj8ypsm52jk61mqu8hzxgfxfd6p2t3wldh62praod4q9k4j8y1ajj05keby9nl2jjnc6t3hr3i8l8vsu671w588m1nuh81fkddm1y65y7cktlschz9dzzaiubjzv741u1ndtl3cdxx1qaakrelwrk2tvmg0tvjbddkr7risgsqs4y19150yg74zxvqcy39p9m2gqb2arc9o2hpv6v2vb3haysq1r0qev63ormfdrb56ipdnjbjk3wd3yjt8l2jkp3s8yh6bux3b2 == \g\i\j\0\i\6\n\h\4\m\k\b\o\h\r\v\t\7\i\p\z\f\g\7\a\a\0\w\p\j\5\b\8\e\q\a\e\m\9\y\a\3\0\2\x\u\6\6\y\k\i\6\m\v\m\j\3\v\k\5\8\2\5\m\r\g\y\w\f\k\m\l\v\0\0\e\6\6\6\j\7\x\c\h\f\m\t\j\9\0\r\x\c\v\s\i\d\5\d\g\m\n\a\r\e\4\7\t\e\s\4\p\p\b\j\r\l\s\d\h\e\n\r\n\r\p\m\e\b\p\h\t\g\3\e\7\1\m\w\2\l\f\q\5\g\0\c\0\f\d\9\q\e\h\0\c\n\a\k\j\e\c\o\n\b\w\i\8\z\s\8\6\j\j\u\x\5\c\p\1\r\y\r\j\y\v\v\e\s\9\f\x\w\1\c\e\1\q\d\4\r\s\d\y\d\j\4\r\7\5\v\c\p\6\l\8\c\x\j\o\a\q\m\h\9\p\n\o\w\w\u\7\g\9\c\l\6\d\w\4\9\h\5\x\r\j\8\y\p\s\m\5\2\j\k\6\1\m\q\u\8\h\z\x\g\f\x\f\d\6\p\2\t\3\w\l\d\h\6\2\p\r\a\o\d\4\q\9\k\4\j\8\y\1\a\j\j\0\5\k\e\b\y\9\n\l\2\j\j\n\c\6\t\3\h\r\3\i\8\l\8\v\s\u\6\7\1\w\5\8\8\m\1\n\u\h\8\1\f\k\d\d\m\1\y\6\5\y\7\c\k\t\l\s\c\h\z\9\d\z\z\a\i\u\b\j\z\v\7\4\1\u\1\n\d\t\l\3\c\d\x\x\1\q\a\a\k\r\e\l\w\r\k\2\t\v\m\g\0\t\v\j\b\d\d\k\r\7\r\i\s\g\s\q\s\4\y\1\9\1\5\0\y\g\7\4\z\x\v\q\c\y\3\9\p\9\m\2\g\q\b\2\a\r\c\9\o\2\h\p\v\6\v\2\v\b\3\h\a\y\s\q\1\r\0\q\e\v\6\3\o\r\m\f\d\r\b\5\6\i\p\d\n\j\b\j\k\3\w\d\3\y\j\t\8\l\2\j\k\p\3\s\8\y\h\6\b\u\x\3\b\2 ]] 00:05:57.749 13:44:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:57.749 13:44:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:05:57.749 [2024-12-06 13:44:56.995130] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:05:57.749 [2024-12-06 13:44:56.995240] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60274 ] 00:05:57.749 [2024-12-06 13:44:57.135713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.008 [2024-12-06 13:44:57.189455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.008 [2024-12-06 13:44:57.260642] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:58.008  [2024-12-06T13:44:57.670Z] Copying: 512/512 [B] (average 166 kBps) 00:05:58.267 00:05:58.267 13:44:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ gij0i6nh4mkbohrvt7ipzfg7aa0wpj5b8eqaem9ya302xu66yki6mvmj3vk5825mrgywfkmlv00e666j7xchfmtj90rxcvsid5dgmnare47tes4ppbjrlsdhenrnrpmebphtg3e71mw2lfq5g0c0fd9qeh0cnakjeconbwi8zs86jjux5cp1ryrjyvves9fxw1ce1qd4rsdydj4r75vcp6l8cxjoaqmh9pnowwu7g9cl6dw49h5xrj8ypsm52jk61mqu8hzxgfxfd6p2t3wldh62praod4q9k4j8y1ajj05keby9nl2jjnc6t3hr3i8l8vsu671w588m1nuh81fkddm1y65y7cktlschz9dzzaiubjzv741u1ndtl3cdxx1qaakrelwrk2tvmg0tvjbddkr7risgsqs4y19150yg74zxvqcy39p9m2gqb2arc9o2hpv6v2vb3haysq1r0qev63ormfdrb56ipdnjbjk3wd3yjt8l2jkp3s8yh6bux3b2 == \g\i\j\0\i\6\n\h\4\m\k\b\o\h\r\v\t\7\i\p\z\f\g\7\a\a\0\w\p\j\5\b\8\e\q\a\e\m\9\y\a\3\0\2\x\u\6\6\y\k\i\6\m\v\m\j\3\v\k\5\8\2\5\m\r\g\y\w\f\k\m\l\v\0\0\e\6\6\6\j\7\x\c\h\f\m\t\j\9\0\r\x\c\v\s\i\d\5\d\g\m\n\a\r\e\4\7\t\e\s\4\p\p\b\j\r\l\s\d\h\e\n\r\n\r\p\m\e\b\p\h\t\g\3\e\7\1\m\w\2\l\f\q\5\g\0\c\0\f\d\9\q\e\h\0\c\n\a\k\j\e\c\o\n\b\w\i\8\z\s\8\6\j\j\u\x\5\c\p\1\r\y\r\j\y\v\v\e\s\9\f\x\w\1\c\e\1\q\d\4\r\s\d\y\d\j\4\r\7\5\v\c\p\6\l\8\c\x\j\o\a\q\m\h\9\p\n\o\w\w\u\7\g\9\c\l\6\d\w\4\9\h\5\x\r\j\8\y\p\s\m\5\2\j\k\6\1\m\q\u\8\h\z\x\g\f\x\f\d\6\p\2\t\3\w\l\d\h\6\2\p\r\a\o\d\4\q\9\k\4\j\8\y\1\a\j\j\0\5\k\e\b\y\9\n\l\2\j\j\n\c\6\t\3\h\r\3\i\8\l\8\v\s\u\6\7\1\w\5\8\8\m\1\n\u\h\8\1\f\k\d\d\m\1\y\6\5\y\7\c\k\t\l\s\c\h\z\9\d\z\z\a\i\u\b\j\z\v\7\4\1\u\1\n\d\t\l\3\c\d\x\x\1\q\a\a\k\r\e\l\w\r\k\2\t\v\m\g\0\t\v\j\b\d\d\k\r\7\r\i\s\g\s\q\s\4\y\1\9\1\5\0\y\g\7\4\z\x\v\q\c\y\3\9\p\9\m\2\g\q\b\2\a\r\c\9\o\2\h\p\v\6\v\2\v\b\3\h\a\y\s\q\1\r\0\q\e\v\6\3\o\r\m\f\d\r\b\5\6\i\p\d\n\j\b\j\k\3\w\d\3\y\j\t\8\l\2\j\k\p\3\s\8\y\h\6\b\u\x\3\b\2 ]] 00:05:58.267 13:44:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:58.267 13:44:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:05:58.267 [2024-12-06 13:44:57.577548] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:05:58.267 [2024-12-06 13:44:57.577653] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60278 ] 00:05:58.526 [2024-12-06 13:44:57.717010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.526 [2024-12-06 13:44:57.759446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.526 [2024-12-06 13:44:57.829202] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:58.526  [2024-12-06T13:44:58.210Z] Copying: 512/512 [B] (average 250 kBps) 00:05:58.806 00:05:58.806 13:44:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ gij0i6nh4mkbohrvt7ipzfg7aa0wpj5b8eqaem9ya302xu66yki6mvmj3vk5825mrgywfkmlv00e666j7xchfmtj90rxcvsid5dgmnare47tes4ppbjrlsdhenrnrpmebphtg3e71mw2lfq5g0c0fd9qeh0cnakjeconbwi8zs86jjux5cp1ryrjyvves9fxw1ce1qd4rsdydj4r75vcp6l8cxjoaqmh9pnowwu7g9cl6dw49h5xrj8ypsm52jk61mqu8hzxgfxfd6p2t3wldh62praod4q9k4j8y1ajj05keby9nl2jjnc6t3hr3i8l8vsu671w588m1nuh81fkddm1y65y7cktlschz9dzzaiubjzv741u1ndtl3cdxx1qaakrelwrk2tvmg0tvjbddkr7risgsqs4y19150yg74zxvqcy39p9m2gqb2arc9o2hpv6v2vb3haysq1r0qev63ormfdrb56ipdnjbjk3wd3yjt8l2jkp3s8yh6bux3b2 == \g\i\j\0\i\6\n\h\4\m\k\b\o\h\r\v\t\7\i\p\z\f\g\7\a\a\0\w\p\j\5\b\8\e\q\a\e\m\9\y\a\3\0\2\x\u\6\6\y\k\i\6\m\v\m\j\3\v\k\5\8\2\5\m\r\g\y\w\f\k\m\l\v\0\0\e\6\6\6\j\7\x\c\h\f\m\t\j\9\0\r\x\c\v\s\i\d\5\d\g\m\n\a\r\e\4\7\t\e\s\4\p\p\b\j\r\l\s\d\h\e\n\r\n\r\p\m\e\b\p\h\t\g\3\e\7\1\m\w\2\l\f\q\5\g\0\c\0\f\d\9\q\e\h\0\c\n\a\k\j\e\c\o\n\b\w\i\8\z\s\8\6\j\j\u\x\5\c\p\1\r\y\r\j\y\v\v\e\s\9\f\x\w\1\c\e\1\q\d\4\r\s\d\y\d\j\4\r\7\5\v\c\p\6\l\8\c\x\j\o\a\q\m\h\9\p\n\o\w\w\u\7\g\9\c\l\6\d\w\4\9\h\5\x\r\j\8\y\p\s\m\5\2\j\k\6\1\m\q\u\8\h\z\x\g\f\x\f\d\6\p\2\t\3\w\l\d\h\6\2\p\r\a\o\d\4\q\9\k\4\j\8\y\1\a\j\j\0\5\k\e\b\y\9\n\l\2\j\j\n\c\6\t\3\h\r\3\i\8\l\8\v\s\u\6\7\1\w\5\8\8\m\1\n\u\h\8\1\f\k\d\d\m\1\y\6\5\y\7\c\k\t\l\s\c\h\z\9\d\z\z\a\i\u\b\j\z\v\7\4\1\u\1\n\d\t\l\3\c\d\x\x\1\q\a\a\k\r\e\l\w\r\k\2\t\v\m\g\0\t\v\j\b\d\d\k\r\7\r\i\s\g\s\q\s\4\y\1\9\1\5\0\y\g\7\4\z\x\v\q\c\y\3\9\p\9\m\2\g\q\b\2\a\r\c\9\o\2\h\p\v\6\v\2\v\b\3\h\a\y\s\q\1\r\0\q\e\v\6\3\o\r\m\f\d\r\b\5\6\i\p\d\n\j\b\j\k\3\w\d\3\y\j\t\8\l\2\j\k\p\3\s\8\y\h\6\b\u\x\3\b\2 ]] 00:05:58.806 13:44:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:05:58.806 13:44:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:05:58.806 13:44:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:05:58.806 13:44:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:05:58.806 13:44:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:58.806 13:44:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:05:58.806 [2024-12-06 13:44:58.157672] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:05:58.806 [2024-12-06 13:44:58.157764] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60293 ] 00:05:59.065 [2024-12-06 13:44:58.299339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.065 [2024-12-06 13:44:58.350946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.065 [2024-12-06 13:44:58.423987] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:59.325  [2024-12-06T13:44:58.729Z] Copying: 512/512 [B] (average 500 kBps) 00:05:59.325 00:05:59.325 13:44:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ jyr4tkaysplba4vt5gqhw3tbg9ckzpnejnhfd70fjx4x9okbpri71364kmtmh4rvvyqu8qqh5wj8ktc7i99wwvz8vjxxgejgeodbxwf74uoji5sl6erfmiaesm8kjpwxo8ybras1yt26fywobf1i2vseyf5q939su8gipcdkm0bo8nvkeg3w36lvop1f8irliny8ti53ufvqtq9nbl7kr6kpyf2xige5ij7rggi62wpyw90jmuny8vuiga2gvivf359j9fawjbr74ywvbifsukchxna3rpvziaxyadk309pi7itrrpl90va11l0evb030xv3nh8f7tuksqhtn9muak4m1nsvrelvstklt03atvhvg4cf4n59rcso1t6wtfpd73xudu36twdquuxy7qgyhw8yjcgs46s57b0m6flwkdu1y517266c2bymu7x7wtzwgdj8kkxuhzm8a0cxackynrcrzxb2hsrnpwa7d0rx9rgprw8dqltih5duqfrf15aw == \j\y\r\4\t\k\a\y\s\p\l\b\a\4\v\t\5\g\q\h\w\3\t\b\g\9\c\k\z\p\n\e\j\n\h\f\d\7\0\f\j\x\4\x\9\o\k\b\p\r\i\7\1\3\6\4\k\m\t\m\h\4\r\v\v\y\q\u\8\q\q\h\5\w\j\8\k\t\c\7\i\9\9\w\w\v\z\8\v\j\x\x\g\e\j\g\e\o\d\b\x\w\f\7\4\u\o\j\i\5\s\l\6\e\r\f\m\i\a\e\s\m\8\k\j\p\w\x\o\8\y\b\r\a\s\1\y\t\2\6\f\y\w\o\b\f\1\i\2\v\s\e\y\f\5\q\9\3\9\s\u\8\g\i\p\c\d\k\m\0\b\o\8\n\v\k\e\g\3\w\3\6\l\v\o\p\1\f\8\i\r\l\i\n\y\8\t\i\5\3\u\f\v\q\t\q\9\n\b\l\7\k\r\6\k\p\y\f\2\x\i\g\e\5\i\j\7\r\g\g\i\6\2\w\p\y\w\9\0\j\m\u\n\y\8\v\u\i\g\a\2\g\v\i\v\f\3\5\9\j\9\f\a\w\j\b\r\7\4\y\w\v\b\i\f\s\u\k\c\h\x\n\a\3\r\p\v\z\i\a\x\y\a\d\k\3\0\9\p\i\7\i\t\r\r\p\l\9\0\v\a\1\1\l\0\e\v\b\0\3\0\x\v\3\n\h\8\f\7\t\u\k\s\q\h\t\n\9\m\u\a\k\4\m\1\n\s\v\r\e\l\v\s\t\k\l\t\0\3\a\t\v\h\v\g\4\c\f\4\n\5\9\r\c\s\o\1\t\6\w\t\f\p\d\7\3\x\u\d\u\3\6\t\w\d\q\u\u\x\y\7\q\g\y\h\w\8\y\j\c\g\s\4\6\s\5\7\b\0\m\6\f\l\w\k\d\u\1\y\5\1\7\2\6\6\c\2\b\y\m\u\7\x\7\w\t\z\w\g\d\j\8\k\k\x\u\h\z\m\8\a\0\c\x\a\c\k\y\n\r\c\r\z\x\b\2\h\s\r\n\p\w\a\7\d\0\r\x\9\r\g\p\r\w\8\d\q\l\t\i\h\5\d\u\q\f\r\f\1\5\a\w ]] 00:05:59.325 13:44:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:59.325 13:44:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:05:59.584 [2024-12-06 13:44:58.737812] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:05:59.584 [2024-12-06 13:44:58.737902] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60297 ] 00:05:59.584 [2024-12-06 13:44:58.879058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.584 [2024-12-06 13:44:58.922285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.843 [2024-12-06 13:44:58.989341] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:59.843  [2024-12-06T13:44:59.506Z] Copying: 512/512 [B] (average 500 kBps) 00:06:00.103 00:06:00.103 13:44:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ jyr4tkaysplba4vt5gqhw3tbg9ckzpnejnhfd70fjx4x9okbpri71364kmtmh4rvvyqu8qqh5wj8ktc7i99wwvz8vjxxgejgeodbxwf74uoji5sl6erfmiaesm8kjpwxo8ybras1yt26fywobf1i2vseyf5q939su8gipcdkm0bo8nvkeg3w36lvop1f8irliny8ti53ufvqtq9nbl7kr6kpyf2xige5ij7rggi62wpyw90jmuny8vuiga2gvivf359j9fawjbr74ywvbifsukchxna3rpvziaxyadk309pi7itrrpl90va11l0evb030xv3nh8f7tuksqhtn9muak4m1nsvrelvstklt03atvhvg4cf4n59rcso1t6wtfpd73xudu36twdquuxy7qgyhw8yjcgs46s57b0m6flwkdu1y517266c2bymu7x7wtzwgdj8kkxuhzm8a0cxackynrcrzxb2hsrnpwa7d0rx9rgprw8dqltih5duqfrf15aw == \j\y\r\4\t\k\a\y\s\p\l\b\a\4\v\t\5\g\q\h\w\3\t\b\g\9\c\k\z\p\n\e\j\n\h\f\d\7\0\f\j\x\4\x\9\o\k\b\p\r\i\7\1\3\6\4\k\m\t\m\h\4\r\v\v\y\q\u\8\q\q\h\5\w\j\8\k\t\c\7\i\9\9\w\w\v\z\8\v\j\x\x\g\e\j\g\e\o\d\b\x\w\f\7\4\u\o\j\i\5\s\l\6\e\r\f\m\i\a\e\s\m\8\k\j\p\w\x\o\8\y\b\r\a\s\1\y\t\2\6\f\y\w\o\b\f\1\i\2\v\s\e\y\f\5\q\9\3\9\s\u\8\g\i\p\c\d\k\m\0\b\o\8\n\v\k\e\g\3\w\3\6\l\v\o\p\1\f\8\i\r\l\i\n\y\8\t\i\5\3\u\f\v\q\t\q\9\n\b\l\7\k\r\6\k\p\y\f\2\x\i\g\e\5\i\j\7\r\g\g\i\6\2\w\p\y\w\9\0\j\m\u\n\y\8\v\u\i\g\a\2\g\v\i\v\f\3\5\9\j\9\f\a\w\j\b\r\7\4\y\w\v\b\i\f\s\u\k\c\h\x\n\a\3\r\p\v\z\i\a\x\y\a\d\k\3\0\9\p\i\7\i\t\r\r\p\l\9\0\v\a\1\1\l\0\e\v\b\0\3\0\x\v\3\n\h\8\f\7\t\u\k\s\q\h\t\n\9\m\u\a\k\4\m\1\n\s\v\r\e\l\v\s\t\k\l\t\0\3\a\t\v\h\v\g\4\c\f\4\n\5\9\r\c\s\o\1\t\6\w\t\f\p\d\7\3\x\u\d\u\3\6\t\w\d\q\u\u\x\y\7\q\g\y\h\w\8\y\j\c\g\s\4\6\s\5\7\b\0\m\6\f\l\w\k\d\u\1\y\5\1\7\2\6\6\c\2\b\y\m\u\7\x\7\w\t\z\w\g\d\j\8\k\k\x\u\h\z\m\8\a\0\c\x\a\c\k\y\n\r\c\r\z\x\b\2\h\s\r\n\p\w\a\7\d\0\r\x\9\r\g\p\r\w\8\d\q\l\t\i\h\5\d\u\q\f\r\f\1\5\a\w ]] 00:06:00.103 13:44:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:00.103 13:44:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:00.103 [2024-12-06 13:44:59.299610] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:06:00.103 [2024-12-06 13:44:59.299722] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60312 ] 00:06:00.103 [2024-12-06 13:44:59.437091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.103 [2024-12-06 13:44:59.481818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.362 [2024-12-06 13:44:59.550304] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:00.362  [2024-12-06T13:45:00.025Z] Copying: 512/512 [B] (average 250 kBps) 00:06:00.621 00:06:00.622 13:44:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ jyr4tkaysplba4vt5gqhw3tbg9ckzpnejnhfd70fjx4x9okbpri71364kmtmh4rvvyqu8qqh5wj8ktc7i99wwvz8vjxxgejgeodbxwf74uoji5sl6erfmiaesm8kjpwxo8ybras1yt26fywobf1i2vseyf5q939su8gipcdkm0bo8nvkeg3w36lvop1f8irliny8ti53ufvqtq9nbl7kr6kpyf2xige5ij7rggi62wpyw90jmuny8vuiga2gvivf359j9fawjbr74ywvbifsukchxna3rpvziaxyadk309pi7itrrpl90va11l0evb030xv3nh8f7tuksqhtn9muak4m1nsvrelvstklt03atvhvg4cf4n59rcso1t6wtfpd73xudu36twdquuxy7qgyhw8yjcgs46s57b0m6flwkdu1y517266c2bymu7x7wtzwgdj8kkxuhzm8a0cxackynrcrzxb2hsrnpwa7d0rx9rgprw8dqltih5duqfrf15aw == \j\y\r\4\t\k\a\y\s\p\l\b\a\4\v\t\5\g\q\h\w\3\t\b\g\9\c\k\z\p\n\e\j\n\h\f\d\7\0\f\j\x\4\x\9\o\k\b\p\r\i\7\1\3\6\4\k\m\t\m\h\4\r\v\v\y\q\u\8\q\q\h\5\w\j\8\k\t\c\7\i\9\9\w\w\v\z\8\v\j\x\x\g\e\j\g\e\o\d\b\x\w\f\7\4\u\o\j\i\5\s\l\6\e\r\f\m\i\a\e\s\m\8\k\j\p\w\x\o\8\y\b\r\a\s\1\y\t\2\6\f\y\w\o\b\f\1\i\2\v\s\e\y\f\5\q\9\3\9\s\u\8\g\i\p\c\d\k\m\0\b\o\8\n\v\k\e\g\3\w\3\6\l\v\o\p\1\f\8\i\r\l\i\n\y\8\t\i\5\3\u\f\v\q\t\q\9\n\b\l\7\k\r\6\k\p\y\f\2\x\i\g\e\5\i\j\7\r\g\g\i\6\2\w\p\y\w\9\0\j\m\u\n\y\8\v\u\i\g\a\2\g\v\i\v\f\3\5\9\j\9\f\a\w\j\b\r\7\4\y\w\v\b\i\f\s\u\k\c\h\x\n\a\3\r\p\v\z\i\a\x\y\a\d\k\3\0\9\p\i\7\i\t\r\r\p\l\9\0\v\a\1\1\l\0\e\v\b\0\3\0\x\v\3\n\h\8\f\7\t\u\k\s\q\h\t\n\9\m\u\a\k\4\m\1\n\s\v\r\e\l\v\s\t\k\l\t\0\3\a\t\v\h\v\g\4\c\f\4\n\5\9\r\c\s\o\1\t\6\w\t\f\p\d\7\3\x\u\d\u\3\6\t\w\d\q\u\u\x\y\7\q\g\y\h\w\8\y\j\c\g\s\4\6\s\5\7\b\0\m\6\f\l\w\k\d\u\1\y\5\1\7\2\6\6\c\2\b\y\m\u\7\x\7\w\t\z\w\g\d\j\8\k\k\x\u\h\z\m\8\a\0\c\x\a\c\k\y\n\r\c\r\z\x\b\2\h\s\r\n\p\w\a\7\d\0\r\x\9\r\g\p\r\w\8\d\q\l\t\i\h\5\d\u\q\f\r\f\1\5\a\w ]] 00:06:00.622 13:44:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:00.622 13:44:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:00.622 [2024-12-06 13:44:59.859956] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:06:00.622 [2024-12-06 13:44:59.860043] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60322 ] 00:06:00.622 [2024-12-06 13:44:59.999551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.880 [2024-12-06 13:45:00.042425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.880 [2024-12-06 13:45:00.108662] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:00.880  [2024-12-06T13:45:00.544Z] Copying: 512/512 [B] (average 250 kBps) 00:06:01.140 00:06:01.140 13:45:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ jyr4tkaysplba4vt5gqhw3tbg9ckzpnejnhfd70fjx4x9okbpri71364kmtmh4rvvyqu8qqh5wj8ktc7i99wwvz8vjxxgejgeodbxwf74uoji5sl6erfmiaesm8kjpwxo8ybras1yt26fywobf1i2vseyf5q939su8gipcdkm0bo8nvkeg3w36lvop1f8irliny8ti53ufvqtq9nbl7kr6kpyf2xige5ij7rggi62wpyw90jmuny8vuiga2gvivf359j9fawjbr74ywvbifsukchxna3rpvziaxyadk309pi7itrrpl90va11l0evb030xv3nh8f7tuksqhtn9muak4m1nsvrelvstklt03atvhvg4cf4n59rcso1t6wtfpd73xudu36twdquuxy7qgyhw8yjcgs46s57b0m6flwkdu1y517266c2bymu7x7wtzwgdj8kkxuhzm8a0cxackynrcrzxb2hsrnpwa7d0rx9rgprw8dqltih5duqfrf15aw == \j\y\r\4\t\k\a\y\s\p\l\b\a\4\v\t\5\g\q\h\w\3\t\b\g\9\c\k\z\p\n\e\j\n\h\f\d\7\0\f\j\x\4\x\9\o\k\b\p\r\i\7\1\3\6\4\k\m\t\m\h\4\r\v\v\y\q\u\8\q\q\h\5\w\j\8\k\t\c\7\i\9\9\w\w\v\z\8\v\j\x\x\g\e\j\g\e\o\d\b\x\w\f\7\4\u\o\j\i\5\s\l\6\e\r\f\m\i\a\e\s\m\8\k\j\p\w\x\o\8\y\b\r\a\s\1\y\t\2\6\f\y\w\o\b\f\1\i\2\v\s\e\y\f\5\q\9\3\9\s\u\8\g\i\p\c\d\k\m\0\b\o\8\n\v\k\e\g\3\w\3\6\l\v\o\p\1\f\8\i\r\l\i\n\y\8\t\i\5\3\u\f\v\q\t\q\9\n\b\l\7\k\r\6\k\p\y\f\2\x\i\g\e\5\i\j\7\r\g\g\i\6\2\w\p\y\w\9\0\j\m\u\n\y\8\v\u\i\g\a\2\g\v\i\v\f\3\5\9\j\9\f\a\w\j\b\r\7\4\y\w\v\b\i\f\s\u\k\c\h\x\n\a\3\r\p\v\z\i\a\x\y\a\d\k\3\0\9\p\i\7\i\t\r\r\p\l\9\0\v\a\1\1\l\0\e\v\b\0\3\0\x\v\3\n\h\8\f\7\t\u\k\s\q\h\t\n\9\m\u\a\k\4\m\1\n\s\v\r\e\l\v\s\t\k\l\t\0\3\a\t\v\h\v\g\4\c\f\4\n\5\9\r\c\s\o\1\t\6\w\t\f\p\d\7\3\x\u\d\u\3\6\t\w\d\q\u\u\x\y\7\q\g\y\h\w\8\y\j\c\g\s\4\6\s\5\7\b\0\m\6\f\l\w\k\d\u\1\y\5\1\7\2\6\6\c\2\b\y\m\u\7\x\7\w\t\z\w\g\d\j\8\k\k\x\u\h\z\m\8\a\0\c\x\a\c\k\y\n\r\c\r\z\x\b\2\h\s\r\n\p\w\a\7\d\0\r\x\9\r\g\p\r\w\8\d\q\l\t\i\h\5\d\u\q\f\r\f\1\5\a\w ]] 00:06:01.140 00:06:01.140 real 0m4.581s 00:06:01.140 user 0m2.488s 00:06:01.140 sys 0m2.617s 00:06:01.140 ************************************ 00:06:01.140 END TEST dd_flags_misc 00:06:01.140 ************************************ 00:06:01.140 13:45:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.140 13:45:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:01.140 13:45:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:06:01.140 13:45:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:06:01.140 * Second test run, disabling liburing, forcing AIO 00:06:01.140 13:45:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:06:01.140 13:45:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:06:01.140 13:45:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.140 13:45:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.140 13:45:00 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:06:01.140 ************************************ 00:06:01.140 START TEST dd_flag_append_forced_aio 00:06:01.140 ************************************ 00:06:01.140 13:45:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:06:01.140 13:45:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:06:01.140 13:45:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:06:01.140 13:45:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:06:01.140 13:45:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:01.140 13:45:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:01.140 13:45:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=c80k6l4dqcibz14q71i4thya98m752rv 00:06:01.140 13:45:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:06:01.140 13:45:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:01.140 13:45:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:01.140 13:45:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=sguuizctna1al4d37snpwdcawjnf12ky 00:06:01.140 13:45:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s c80k6l4dqcibz14q71i4thya98m752rv 00:06:01.140 13:45:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s sguuizctna1al4d37snpwdcawjnf12ky 00:06:01.140 13:45:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:01.140 [2024-12-06 13:45:00.475800] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
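dd_flag_append_forced_aio (the run starting above, with --aio forcing the POSIX AIO path now that liburing is disabled) writes one 32-byte string into each dump file, copies dump0 onto dump1 with --oflag=append, and then expects dump1 to hold its original bytes followed by dump0's. The same expectation sketched against coreutils dd, with short placeholder strings instead of the generated ones:

  # Sketch: an append-mode copy must keep the destination's existing bytes in front.
  printf %s AAAA > dump0      # placeholder for the first generated 32-byte string
  printf %s BBBB > dump1      # placeholder for the second one
  dd if=dump0 of=dump1 oflag=append conv=notrunc status=none
  [[ $(cat dump1) == BBBBAAAA ]] || echo "append did not preserve existing contents"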
00:06:01.140 [2024-12-06 13:45:00.475868] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60350 ] 00:06:01.399 [2024-12-06 13:45:00.613041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.399 [2024-12-06 13:45:00.656435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.399 [2024-12-06 13:45:00.723250] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:01.399  [2024-12-06T13:45:01.063Z] Copying: 32/32 [B] (average 31 kBps) 00:06:01.659 00:06:01.659 13:45:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ sguuizctna1al4d37snpwdcawjnf12kyc80k6l4dqcibz14q71i4thya98m752rv == \s\g\u\u\i\z\c\t\n\a\1\a\l\4\d\3\7\s\n\p\w\d\c\a\w\j\n\f\1\2\k\y\c\8\0\k\6\l\4\d\q\c\i\b\z\1\4\q\7\1\i\4\t\h\y\a\9\8\m\7\5\2\r\v ]] 00:06:01.659 00:06:01.659 real 0m0.606s 00:06:01.659 user 0m0.335s 00:06:01.659 sys 0m0.147s 00:06:01.659 13:45:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.659 ************************************ 00:06:01.659 END TEST dd_flag_append_forced_aio 00:06:01.659 ************************************ 00:06:01.659 13:45:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:01.919 13:45:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:06:01.919 13:45:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.919 13:45:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.919 13:45:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:01.919 ************************************ 00:06:01.919 START TEST dd_flag_directory_forced_aio 00:06:01.919 ************************************ 00:06:01.919 13:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:06:01.919 13:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:01.919 13:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:01.919 13:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:01.919 13:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:01.919 13:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.919 13:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:01.919 13:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.919 13:45:01 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:01.919 13:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.919 13:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:01.919 13:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:01.919 13:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:01.919 [2024-12-06 13:45:01.137536] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:06:01.919 [2024-12-06 13:45:01.137619] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60382 ] 00:06:01.919 [2024-12-06 13:45:01.280912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.178 [2024-12-06 13:45:01.324711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.178 [2024-12-06 13:45:01.392285] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:02.178 [2024-12-06 13:45:01.438051] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:02.178 [2024-12-06 13:45:01.438135] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:02.178 [2024-12-06 13:45:01.438166] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:02.438 [2024-12-06 13:45:01.596603] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:02.438 13:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:06:02.438 13:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:02.438 13:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:06:02.438 13:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:02.438 13:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:02.438 13:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:02.438 13:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:02.438 13:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:02.438 13:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:02.438 13:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:02.438 13:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.438 13:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:02.438 13:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.438 13:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:02.438 13:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.438 13:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:02.438 13:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:02.438 13:45:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:02.438 [2024-12-06 13:45:01.718617] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:06:02.438 [2024-12-06 13:45:01.718703] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60386 ] 00:06:02.698 [2024-12-06 13:45:01.855983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.698 [2024-12-06 13:45:01.899559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.698 [2024-12-06 13:45:01.967726] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:02.698 [2024-12-06 13:45:02.012383] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:02.698 [2024-12-06 13:45:02.012442] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:02.698 [2024-12-06 13:45:02.012471] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:02.957 [2024-12-06 13:45:02.167391] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:02.957 13:45:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:06:02.957 13:45:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:02.957 13:45:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:06:02.957 13:45:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:02.957 13:45:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:02.957 13:45:02 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:02.957 00:06:02.957 real 0m1.150s 00:06:02.957 user 0m0.624s 00:06:02.957 sys 0m0.318s 00:06:02.958 13:45:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.958 13:45:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:02.958 ************************************ 00:06:02.958 END TEST dd_flag_directory_forced_aio 00:06:02.958 ************************************ 00:06:02.958 13:45:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:06:02.958 13:45:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.958 13:45:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.958 13:45:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:02.958 ************************************ 00:06:02.958 START TEST dd_flag_nofollow_forced_aio 00:06:02.958 ************************************ 00:06:02.958 13:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:06:02.958 13:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:02.958 13:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:02.958 13:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:02.958 13:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:02.958 13:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:02.958 13:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:02.958 13:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:02.958 13:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:02.958 13:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.958 13:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:02.958 13:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.958 13:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:02.958 13:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.958 13:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:02.958 13:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:02.958 13:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:02.958 [2024-12-06 13:45:02.350628] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:06:02.958 [2024-12-06 13:45:02.350711] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60422 ] 00:06:03.221 [2024-12-06 13:45:02.492850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.221 [2024-12-06 13:45:02.535878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.221 [2024-12-06 13:45:02.603168] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:03.484 [2024-12-06 13:45:02.649686] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:03.484 [2024-12-06 13:45:02.649751] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:03.484 [2024-12-06 13:45:02.649780] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:03.484 [2024-12-06 13:45:02.804183] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:03.484 13:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:06:03.484 13:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:03.484 13:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:06:03.484 13:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:03.484 13:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:03.484 13:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:03.484 13:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:03.484 13:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:03.484 13:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:03.484 13:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:03.484 13:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:03.484 13:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:03.484 13:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:03.484 13:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:03.484 13:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:03.484 13:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:03.484 13:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:03.484 13:45:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:03.743 [2024-12-06 13:45:02.929142] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:06:03.743 [2024-12-06 13:45:02.929229] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60432 ] 00:06:03.743 [2024-12-06 13:45:03.072184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.743 [2024-12-06 13:45:03.115931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.002 [2024-12-06 13:45:03.183460] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:04.002 [2024-12-06 13:45:03.228133] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:04.002 [2024-12-06 13:45:03.228219] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:04.002 [2024-12-06 13:45:03.228250] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:04.002 [2024-12-06 13:45:03.381733] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:04.261 13:45:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:06:04.261 13:45:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:04.261 13:45:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:06:04.261 13:45:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:04.261 13:45:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:04.261 13:45:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:04.261 13:45:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:06:04.261 13:45:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:04.261 13:45:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:04.261 13:45:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:04.261 [2024-12-06 13:45:03.507576] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:06:04.261 [2024-12-06 13:45:03.507658] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60439 ] 00:06:04.261 [2024-12-06 13:45:03.649228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.520 [2024-12-06 13:45:03.692582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.520 [2024-12-06 13:45:03.759082] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:04.520  [2024-12-06T13:45:04.183Z] Copying: 512/512 [B] (average 500 kBps) 00:06:04.779 00:06:04.779 13:45:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ 9mhc4jyvkvv5seh12jj8ysi0xggkmcs4mlao6alfyf7bw77hanso8mr5b12ablzuutqkoqo9jjxvsyoe73hhitcwutff23yu3felc7ajk4h8bh4tbfqrgk4yoyq0dieg2vxx6z74r8op7njwrnnlila62nhywfg6z1uskxto4sfk7m1pgshdnherkmfv2kgnrosx2xvgbzvhpgtdaym6e5027mf4d6epb05nkydnu9govbbkovwftxm4aku8bwc4jiuue5j0l8853opo70e9g8js5w9jhbduivkjxnvc5rvn79aigf2eq8x7hihnhnu4otl6auhke5sgyev835nou8ryabjcltzczti3ti48yvdsj6cpaygvqa27cq9u78392hgiytix5oxukurrgivrq8cdwvwhw6yck6wedp2upndywh3cvoyiv7lpzlrk9ch36v2f3jf5qnywajk9ugl0osj1xdqxw44j35nw4647hn2uwr6wv074dczm7aja588j == \9\m\h\c\4\j\y\v\k\v\v\5\s\e\h\1\2\j\j\8\y\s\i\0\x\g\g\k\m\c\s\4\m\l\a\o\6\a\l\f\y\f\7\b\w\7\7\h\a\n\s\o\8\m\r\5\b\1\2\a\b\l\z\u\u\t\q\k\o\q\o\9\j\j\x\v\s\y\o\e\7\3\h\h\i\t\c\w\u\t\f\f\2\3\y\u\3\f\e\l\c\7\a\j\k\4\h\8\b\h\4\t\b\f\q\r\g\k\4\y\o\y\q\0\d\i\e\g\2\v\x\x\6\z\7\4\r\8\o\p\7\n\j\w\r\n\n\l\i\l\a\6\2\n\h\y\w\f\g\6\z\1\u\s\k\x\t\o\4\s\f\k\7\m\1\p\g\s\h\d\n\h\e\r\k\m\f\v\2\k\g\n\r\o\s\x\2\x\v\g\b\z\v\h\p\g\t\d\a\y\m\6\e\5\0\2\7\m\f\4\d\6\e\p\b\0\5\n\k\y\d\n\u\9\g\o\v\b\b\k\o\v\w\f\t\x\m\4\a\k\u\8\b\w\c\4\j\i\u\u\e\5\j\0\l\8\8\5\3\o\p\o\7\0\e\9\g\8\j\s\5\w\9\j\h\b\d\u\i\v\k\j\x\n\v\c\5\r\v\n\7\9\a\i\g\f\2\e\q\8\x\7\h\i\h\n\h\n\u\4\o\t\l\6\a\u\h\k\e\5\s\g\y\e\v\8\3\5\n\o\u\8\r\y\a\b\j\c\l\t\z\c\z\t\i\3\t\i\4\8\y\v\d\s\j\6\c\p\a\y\g\v\q\a\2\7\c\q\9\u\7\8\3\9\2\h\g\i\y\t\i\x\5\o\x\u\k\u\r\r\g\i\v\r\q\8\c\d\w\v\w\h\w\6\y\c\k\6\w\e\d\p\2\u\p\n\d\y\w\h\3\c\v\o\y\i\v\7\l\p\z\l\r\k\9\c\h\3\6\v\2\f\3\j\f\5\q\n\y\w\a\j\k\9\u\g\l\0\o\s\j\1\x\d\q\x\w\4\4\j\3\5\n\w\4\6\4\7\h\n\2\u\w\r\6\w\v\0\7\4\d\c\z\m\7\a\j\a\5\8\8\j ]] 00:06:04.779 00:06:04.779 real 0m1.759s 00:06:04.779 user 0m0.958s 00:06:04.779 sys 0m0.473s 00:06:04.779 13:45:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.779 13:45:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:04.779 ************************************ 00:06:04.779 END TEST dd_flag_nofollow_forced_aio 00:06:04.779 ************************************ 00:06:04.779 13:45:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:06:04.780 13:45:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.780 13:45:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.780 13:45:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:04.780 ************************************ 00:06:04.780 START TEST dd_flag_noatime_forced_aio 00:06:04.780 ************************************ 00:06:04.780 13:45:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:06:04.780 13:45:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:06:04.780 13:45:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:06:04.780 13:45:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:06:04.780 13:45:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:04.780 13:45:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:04.780 13:45:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:04.780 13:45:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1733492703 00:06:04.780 13:45:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:04.780 13:45:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1733492704 00:06:04.780 13:45:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:06:06.270 13:45:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:06.270 [2024-12-06 13:45:05.176512] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
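The dd_flag_directory_forced_aio and dd_flag_nofollow_forced_aio runs above are negative tests: opening a regular file with the directory flag must fail with "Not a directory", opening through a symlink with nofollow must fail with "Too many levels of symbolic links", and the harness's NOT wrapper maps the non-zero exit back to a pass. Coreutils dd exposes the same open(2) flags, so a quick sketch of both expectations (file names are placeholders):

  # Sketch: both opens are expected to fail (ENOTDIR and ELOOP respectively).
  printf %s data > dump0
  ln -sf dump0 dump0.link
  dd if=dump0 iflag=directory of=/dev/null status=none \
    && echo "unexpected: O_DIRECTORY open of a regular file succeeded"
  dd if=dump0.link iflag=nofollow of=/dev/null status=none \
    && echo "unexpected: O_NOFOLLOW open went through the symlink"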
00:06:06.270 [2024-12-06 13:45:05.176601] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60484 ] 00:06:06.270 [2024-12-06 13:45:05.327996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.270 [2024-12-06 13:45:05.383826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.270 [2024-12-06 13:45:05.454335] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:06.270  [2024-12-06T13:45:05.953Z] Copying: 512/512 [B] (average 500 kBps) 00:06:06.549 00:06:06.549 13:45:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:06.549 13:45:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1733492703 )) 00:06:06.549 13:45:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:06.549 13:45:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1733492704 )) 00:06:06.549 13:45:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:06.549 [2024-12-06 13:45:05.824587] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:06:06.549 [2024-12-06 13:45:05.824679] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60491 ] 00:06:06.808 [2024-12-06 13:45:05.968710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.808 [2024-12-06 13:45:06.010280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.808 [2024-12-06 13:45:06.076664] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:06.808  [2024-12-06T13:45:06.472Z] Copying: 512/512 [B] (average 500 kBps) 00:06:07.068 00:06:07.068 13:45:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:07.068 13:45:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1733492706 )) 00:06:07.068 00:06:07.068 real 0m2.289s 00:06:07.068 user 0m0.689s 00:06:07.068 sys 0m0.359s 00:06:07.068 13:45:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.068 13:45:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:07.068 ************************************ 00:06:07.068 END TEST dd_flag_noatime_forced_aio 00:06:07.068 ************************************ 00:06:07.068 13:45:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:06:07.068 13:45:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.068 13:45:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.068 13:45:06 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:06:07.068 ************************************ 00:06:07.068 START TEST dd_flags_misc_forced_aio 00:06:07.068 ************************************ 00:06:07.068 13:45:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:06:07.068 13:45:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:07.068 13:45:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:07.068 13:45:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:07.068 13:45:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:07.068 13:45:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:07.068 13:45:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:07.068 13:45:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:07.068 13:45:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:07.068 13:45:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:07.327 [2024-12-06 13:45:06.499129] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:06:07.327 [2024-12-06 13:45:06.499213] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60523 ] 00:06:07.327 [2024-12-06 13:45:06.643991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.327 [2024-12-06 13:45:06.686120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.586 [2024-12-06 13:45:06.754141] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:07.586  [2024-12-06T13:45:07.250Z] Copying: 512/512 [B] (average 500 kBps) 00:06:07.846 00:06:07.846 13:45:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 4k17kho8g2mgwly8ohd9tejbme5zvmhawo2r74jw52fjhp4v252b6zspcbi107oos9u8n63s60hjuyf9kojxld24qxlhlgtpcolqd9vojrbretfuccnrethfjy3odhdyu55yluo074ev3n5k4559u4edgxkn8svr4012r3cdhl9rv3p5uim1p9n3869zjouy9topihwtobyiyqdtukz83l93ynezklk3snlgb4mm1gbnckwbkwhbwnolzwavyt828nfir4oyjgdg20mf1lnwg5h8e9qy3y9d67v2mmomssmqyb1fxpiwyz0qjo9jrh6grnyqglgzcqgk5byni6r3s0bmm37ivy18uj5ic2xw0k1xs4mtzkl8jbtakv52h9zayv4a1e5mlcxcg5nrkxoianzesqptd7bh24x9ej1d0v0nkfc5rau4uo908d58az7wn3in89vh66zdgeggwyz7hrpylv1ppo8owawjssnqhos4xf2syr43kmxfrlv4atnb == 
\4\k\1\7\k\h\o\8\g\2\m\g\w\l\y\8\o\h\d\9\t\e\j\b\m\e\5\z\v\m\h\a\w\o\2\r\7\4\j\w\5\2\f\j\h\p\4\v\2\5\2\b\6\z\s\p\c\b\i\1\0\7\o\o\s\9\u\8\n\6\3\s\6\0\h\j\u\y\f\9\k\o\j\x\l\d\2\4\q\x\l\h\l\g\t\p\c\o\l\q\d\9\v\o\j\r\b\r\e\t\f\u\c\c\n\r\e\t\h\f\j\y\3\o\d\h\d\y\u\5\5\y\l\u\o\0\7\4\e\v\3\n\5\k\4\5\5\9\u\4\e\d\g\x\k\n\8\s\v\r\4\0\1\2\r\3\c\d\h\l\9\r\v\3\p\5\u\i\m\1\p\9\n\3\8\6\9\z\j\o\u\y\9\t\o\p\i\h\w\t\o\b\y\i\y\q\d\t\u\k\z\8\3\l\9\3\y\n\e\z\k\l\k\3\s\n\l\g\b\4\m\m\1\g\b\n\c\k\w\b\k\w\h\b\w\n\o\l\z\w\a\v\y\t\8\2\8\n\f\i\r\4\o\y\j\g\d\g\2\0\m\f\1\l\n\w\g\5\h\8\e\9\q\y\3\y\9\d\6\7\v\2\m\m\o\m\s\s\m\q\y\b\1\f\x\p\i\w\y\z\0\q\j\o\9\j\r\h\6\g\r\n\y\q\g\l\g\z\c\q\g\k\5\b\y\n\i\6\r\3\s\0\b\m\m\3\7\i\v\y\1\8\u\j\5\i\c\2\x\w\0\k\1\x\s\4\m\t\z\k\l\8\j\b\t\a\k\v\5\2\h\9\z\a\y\v\4\a\1\e\5\m\l\c\x\c\g\5\n\r\k\x\o\i\a\n\z\e\s\q\p\t\d\7\b\h\2\4\x\9\e\j\1\d\0\v\0\n\k\f\c\5\r\a\u\4\u\o\9\0\8\d\5\8\a\z\7\w\n\3\i\n\8\9\v\h\6\6\z\d\g\e\g\g\w\y\z\7\h\r\p\y\l\v\1\p\p\o\8\o\w\a\w\j\s\s\n\q\h\o\s\4\x\f\2\s\y\r\4\3\k\m\x\f\r\l\v\4\a\t\n\b ]] 00:06:07.846 13:45:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:07.846 13:45:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:07.846 [2024-12-06 13:45:07.107392] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:06:07.846 [2024-12-06 13:45:07.107483] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60531 ] 00:06:08.105 [2024-12-06 13:45:07.250093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.105 [2024-12-06 13:45:07.292962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.105 [2024-12-06 13:45:07.359200] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:08.105  [2024-12-06T13:45:07.767Z] Copying: 512/512 [B] (average 500 kBps) 00:06:08.363 00:06:08.363 13:45:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 4k17kho8g2mgwly8ohd9tejbme5zvmhawo2r74jw52fjhp4v252b6zspcbi107oos9u8n63s60hjuyf9kojxld24qxlhlgtpcolqd9vojrbretfuccnrethfjy3odhdyu55yluo074ev3n5k4559u4edgxkn8svr4012r3cdhl9rv3p5uim1p9n3869zjouy9topihwtobyiyqdtukz83l93ynezklk3snlgb4mm1gbnckwbkwhbwnolzwavyt828nfir4oyjgdg20mf1lnwg5h8e9qy3y9d67v2mmomssmqyb1fxpiwyz0qjo9jrh6grnyqglgzcqgk5byni6r3s0bmm37ivy18uj5ic2xw0k1xs4mtzkl8jbtakv52h9zayv4a1e5mlcxcg5nrkxoianzesqptd7bh24x9ej1d0v0nkfc5rau4uo908d58az7wn3in89vh66zdgeggwyz7hrpylv1ppo8owawjssnqhos4xf2syr43kmxfrlv4atnb == 
\4\k\1\7\k\h\o\8\g\2\m\g\w\l\y\8\o\h\d\9\t\e\j\b\m\e\5\z\v\m\h\a\w\o\2\r\7\4\j\w\5\2\f\j\h\p\4\v\2\5\2\b\6\z\s\p\c\b\i\1\0\7\o\o\s\9\u\8\n\6\3\s\6\0\h\j\u\y\f\9\k\o\j\x\l\d\2\4\q\x\l\h\l\g\t\p\c\o\l\q\d\9\v\o\j\r\b\r\e\t\f\u\c\c\n\r\e\t\h\f\j\y\3\o\d\h\d\y\u\5\5\y\l\u\o\0\7\4\e\v\3\n\5\k\4\5\5\9\u\4\e\d\g\x\k\n\8\s\v\r\4\0\1\2\r\3\c\d\h\l\9\r\v\3\p\5\u\i\m\1\p\9\n\3\8\6\9\z\j\o\u\y\9\t\o\p\i\h\w\t\o\b\y\i\y\q\d\t\u\k\z\8\3\l\9\3\y\n\e\z\k\l\k\3\s\n\l\g\b\4\m\m\1\g\b\n\c\k\w\b\k\w\h\b\w\n\o\l\z\w\a\v\y\t\8\2\8\n\f\i\r\4\o\y\j\g\d\g\2\0\m\f\1\l\n\w\g\5\h\8\e\9\q\y\3\y\9\d\6\7\v\2\m\m\o\m\s\s\m\q\y\b\1\f\x\p\i\w\y\z\0\q\j\o\9\j\r\h\6\g\r\n\y\q\g\l\g\z\c\q\g\k\5\b\y\n\i\6\r\3\s\0\b\m\m\3\7\i\v\y\1\8\u\j\5\i\c\2\x\w\0\k\1\x\s\4\m\t\z\k\l\8\j\b\t\a\k\v\5\2\h\9\z\a\y\v\4\a\1\e\5\m\l\c\x\c\g\5\n\r\k\x\o\i\a\n\z\e\s\q\p\t\d\7\b\h\2\4\x\9\e\j\1\d\0\v\0\n\k\f\c\5\r\a\u\4\u\o\9\0\8\d\5\8\a\z\7\w\n\3\i\n\8\9\v\h\6\6\z\d\g\e\g\g\w\y\z\7\h\r\p\y\l\v\1\p\p\o\8\o\w\a\w\j\s\s\n\q\h\o\s\4\x\f\2\s\y\r\4\3\k\m\x\f\r\l\v\4\a\t\n\b ]] 00:06:08.363 13:45:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:08.363 13:45:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:08.363 [2024-12-06 13:45:07.712669] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:06:08.363 [2024-12-06 13:45:07.712759] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60538 ] 00:06:08.621 [2024-12-06 13:45:07.856391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.621 [2024-12-06 13:45:07.897494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.621 [2024-12-06 13:45:07.966365] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:08.621  [2024-12-06T13:45:08.284Z] Copying: 512/512 [B] (average 125 kBps) 00:06:08.880 00:06:08.880 13:45:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 4k17kho8g2mgwly8ohd9tejbme5zvmhawo2r74jw52fjhp4v252b6zspcbi107oos9u8n63s60hjuyf9kojxld24qxlhlgtpcolqd9vojrbretfuccnrethfjy3odhdyu55yluo074ev3n5k4559u4edgxkn8svr4012r3cdhl9rv3p5uim1p9n3869zjouy9topihwtobyiyqdtukz83l93ynezklk3snlgb4mm1gbnckwbkwhbwnolzwavyt828nfir4oyjgdg20mf1lnwg5h8e9qy3y9d67v2mmomssmqyb1fxpiwyz0qjo9jrh6grnyqglgzcqgk5byni6r3s0bmm37ivy18uj5ic2xw0k1xs4mtzkl8jbtakv52h9zayv4a1e5mlcxcg5nrkxoianzesqptd7bh24x9ej1d0v0nkfc5rau4uo908d58az7wn3in89vh66zdgeggwyz7hrpylv1ppo8owawjssnqhos4xf2syr43kmxfrlv4atnb == 
\4\k\1\7\k\h\o\8\g\2\m\g\w\l\y\8\o\h\d\9\t\e\j\b\m\e\5\z\v\m\h\a\w\o\2\r\7\4\j\w\5\2\f\j\h\p\4\v\2\5\2\b\6\z\s\p\c\b\i\1\0\7\o\o\s\9\u\8\n\6\3\s\6\0\h\j\u\y\f\9\k\o\j\x\l\d\2\4\q\x\l\h\l\g\t\p\c\o\l\q\d\9\v\o\j\r\b\r\e\t\f\u\c\c\n\r\e\t\h\f\j\y\3\o\d\h\d\y\u\5\5\y\l\u\o\0\7\4\e\v\3\n\5\k\4\5\5\9\u\4\e\d\g\x\k\n\8\s\v\r\4\0\1\2\r\3\c\d\h\l\9\r\v\3\p\5\u\i\m\1\p\9\n\3\8\6\9\z\j\o\u\y\9\t\o\p\i\h\w\t\o\b\y\i\y\q\d\t\u\k\z\8\3\l\9\3\y\n\e\z\k\l\k\3\s\n\l\g\b\4\m\m\1\g\b\n\c\k\w\b\k\w\h\b\w\n\o\l\z\w\a\v\y\t\8\2\8\n\f\i\r\4\o\y\j\g\d\g\2\0\m\f\1\l\n\w\g\5\h\8\e\9\q\y\3\y\9\d\6\7\v\2\m\m\o\m\s\s\m\q\y\b\1\f\x\p\i\w\y\z\0\q\j\o\9\j\r\h\6\g\r\n\y\q\g\l\g\z\c\q\g\k\5\b\y\n\i\6\r\3\s\0\b\m\m\3\7\i\v\y\1\8\u\j\5\i\c\2\x\w\0\k\1\x\s\4\m\t\z\k\l\8\j\b\t\a\k\v\5\2\h\9\z\a\y\v\4\a\1\e\5\m\l\c\x\c\g\5\n\r\k\x\o\i\a\n\z\e\s\q\p\t\d\7\b\h\2\4\x\9\e\j\1\d\0\v\0\n\k\f\c\5\r\a\u\4\u\o\9\0\8\d\5\8\a\z\7\w\n\3\i\n\8\9\v\h\6\6\z\d\g\e\g\g\w\y\z\7\h\r\p\y\l\v\1\p\p\o\8\o\w\a\w\j\s\s\n\q\h\o\s\4\x\f\2\s\y\r\4\3\k\m\x\f\r\l\v\4\a\t\n\b ]] 00:06:08.880 13:45:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:08.880 13:45:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:09.138 [2024-12-06 13:45:08.285386] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:06:09.138 [2024-12-06 13:45:08.285501] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60550 ] 00:06:09.138 [2024-12-06 13:45:08.424496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.138 [2024-12-06 13:45:08.469649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.138 [2024-12-06 13:45:08.535990] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:09.397  [2024-12-06T13:45:09.072Z] Copying: 512/512 [B] (average 500 kBps) 00:06:09.668 00:06:09.669 13:45:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 4k17kho8g2mgwly8ohd9tejbme5zvmhawo2r74jw52fjhp4v252b6zspcbi107oos9u8n63s60hjuyf9kojxld24qxlhlgtpcolqd9vojrbretfuccnrethfjy3odhdyu55yluo074ev3n5k4559u4edgxkn8svr4012r3cdhl9rv3p5uim1p9n3869zjouy9topihwtobyiyqdtukz83l93ynezklk3snlgb4mm1gbnckwbkwhbwnolzwavyt828nfir4oyjgdg20mf1lnwg5h8e9qy3y9d67v2mmomssmqyb1fxpiwyz0qjo9jrh6grnyqglgzcqgk5byni6r3s0bmm37ivy18uj5ic2xw0k1xs4mtzkl8jbtakv52h9zayv4a1e5mlcxcg5nrkxoianzesqptd7bh24x9ej1d0v0nkfc5rau4uo908d58az7wn3in89vh66zdgeggwyz7hrpylv1ppo8owawjssnqhos4xf2syr43kmxfrlv4atnb == 
\4\k\1\7\k\h\o\8\g\2\m\g\w\l\y\8\o\h\d\9\t\e\j\b\m\e\5\z\v\m\h\a\w\o\2\r\7\4\j\w\5\2\f\j\h\p\4\v\2\5\2\b\6\z\s\p\c\b\i\1\0\7\o\o\s\9\u\8\n\6\3\s\6\0\h\j\u\y\f\9\k\o\j\x\l\d\2\4\q\x\l\h\l\g\t\p\c\o\l\q\d\9\v\o\j\r\b\r\e\t\f\u\c\c\n\r\e\t\h\f\j\y\3\o\d\h\d\y\u\5\5\y\l\u\o\0\7\4\e\v\3\n\5\k\4\5\5\9\u\4\e\d\g\x\k\n\8\s\v\r\4\0\1\2\r\3\c\d\h\l\9\r\v\3\p\5\u\i\m\1\p\9\n\3\8\6\9\z\j\o\u\y\9\t\o\p\i\h\w\t\o\b\y\i\y\q\d\t\u\k\z\8\3\l\9\3\y\n\e\z\k\l\k\3\s\n\l\g\b\4\m\m\1\g\b\n\c\k\w\b\k\w\h\b\w\n\o\l\z\w\a\v\y\t\8\2\8\n\f\i\r\4\o\y\j\g\d\g\2\0\m\f\1\l\n\w\g\5\h\8\e\9\q\y\3\y\9\d\6\7\v\2\m\m\o\m\s\s\m\q\y\b\1\f\x\p\i\w\y\z\0\q\j\o\9\j\r\h\6\g\r\n\y\q\g\l\g\z\c\q\g\k\5\b\y\n\i\6\r\3\s\0\b\m\m\3\7\i\v\y\1\8\u\j\5\i\c\2\x\w\0\k\1\x\s\4\m\t\z\k\l\8\j\b\t\a\k\v\5\2\h\9\z\a\y\v\4\a\1\e\5\m\l\c\x\c\g\5\n\r\k\x\o\i\a\n\z\e\s\q\p\t\d\7\b\h\2\4\x\9\e\j\1\d\0\v\0\n\k\f\c\5\r\a\u\4\u\o\9\0\8\d\5\8\a\z\7\w\n\3\i\n\8\9\v\h\6\6\z\d\g\e\g\g\w\y\z\7\h\r\p\y\l\v\1\p\p\o\8\o\w\a\w\j\s\s\n\q\h\o\s\4\x\f\2\s\y\r\4\3\k\m\x\f\r\l\v\4\a\t\n\b ]] 00:06:09.669 13:45:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:09.669 13:45:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:09.669 13:45:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:09.669 13:45:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:09.669 13:45:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:09.669 13:45:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:09.669 [2024-12-06 13:45:08.905575] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:06:09.669 [2024-12-06 13:45:08.905678] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60553 ] 00:06:09.669 [2024-12-06 13:45:09.048743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.927 [2024-12-06 13:45:09.091096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.927 [2024-12-06 13:45:09.157405] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:09.927  [2024-12-06T13:45:09.590Z] Copying: 512/512 [B] (average 500 kBps) 00:06:10.186 00:06:10.186 13:45:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ kp8wi0uxnv5443j6sogck3budc2pbypjmobottkouni105el6y0z6pdum1jqp2cpj3mk5lzrr6musqyrlw05yg4dwmd2jh8ns6nsgom8vld0gqktokjpq1440p3xh1svpna9uru9cj594fqaygwtw8ou3bbyo30vebyra95ijzfureuydxvb19pub26ctg9vhxitkk2nerofoglo0du0hgoce6z1m6tybtfy1hzmdp3ye6lmchvh4s2thcliq6hd1r6112x9gvh7917u2v0j690c5oxdytij95vryf1vybuqhn695ueowqfhdwmbvxxyltz8j0ukrrlnj8xnqfs0ud1wjv7gy7eq0t3wxrqtkaua5nsth6rb0h89wrjdz1fpoifvvhzhahipay625jyu6uj20mh4wqs1twuoa2racs9tyfvow0e2r0ds29yiwpm8x9ay45mva4136knenbtm8gsm0pmzx7h4yvwv3ygth28udln9iiq4623iysyy2fuc == \k\p\8\w\i\0\u\x\n\v\5\4\4\3\j\6\s\o\g\c\k\3\b\u\d\c\2\p\b\y\p\j\m\o\b\o\t\t\k\o\u\n\i\1\0\5\e\l\6\y\0\z\6\p\d\u\m\1\j\q\p\2\c\p\j\3\m\k\5\l\z\r\r\6\m\u\s\q\y\r\l\w\0\5\y\g\4\d\w\m\d\2\j\h\8\n\s\6\n\s\g\o\m\8\v\l\d\0\g\q\k\t\o\k\j\p\q\1\4\4\0\p\3\x\h\1\s\v\p\n\a\9\u\r\u\9\c\j\5\9\4\f\q\a\y\g\w\t\w\8\o\u\3\b\b\y\o\3\0\v\e\b\y\r\a\9\5\i\j\z\f\u\r\e\u\y\d\x\v\b\1\9\p\u\b\2\6\c\t\g\9\v\h\x\i\t\k\k\2\n\e\r\o\f\o\g\l\o\0\d\u\0\h\g\o\c\e\6\z\1\m\6\t\y\b\t\f\y\1\h\z\m\d\p\3\y\e\6\l\m\c\h\v\h\4\s\2\t\h\c\l\i\q\6\h\d\1\r\6\1\1\2\x\9\g\v\h\7\9\1\7\u\2\v\0\j\6\9\0\c\5\o\x\d\y\t\i\j\9\5\v\r\y\f\1\v\y\b\u\q\h\n\6\9\5\u\e\o\w\q\f\h\d\w\m\b\v\x\x\y\l\t\z\8\j\0\u\k\r\r\l\n\j\8\x\n\q\f\s\0\u\d\1\w\j\v\7\g\y\7\e\q\0\t\3\w\x\r\q\t\k\a\u\a\5\n\s\t\h\6\r\b\0\h\8\9\w\r\j\d\z\1\f\p\o\i\f\v\v\h\z\h\a\h\i\p\a\y\6\2\5\j\y\u\6\u\j\2\0\m\h\4\w\q\s\1\t\w\u\o\a\2\r\a\c\s\9\t\y\f\v\o\w\0\e\2\r\0\d\s\2\9\y\i\w\p\m\8\x\9\a\y\4\5\m\v\a\4\1\3\6\k\n\e\n\b\t\m\8\g\s\m\0\p\m\z\x\7\h\4\y\v\w\v\3\y\g\t\h\2\8\u\d\l\n\9\i\i\q\4\6\2\3\i\y\s\y\y\2\f\u\c ]] 00:06:10.186 13:45:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:10.186 13:45:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:10.186 [2024-12-06 13:45:09.504429] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:06:10.186 [2024-12-06 13:45:09.504516] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60566 ] 00:06:10.445 [2024-12-06 13:45:09.648384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.445 [2024-12-06 13:45:09.687765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.445 [2024-12-06 13:45:09.754831] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:10.445  [2024-12-06T13:45:10.107Z] Copying: 512/512 [B] (average 500 kBps) 00:06:10.703 00:06:10.703 13:45:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ kp8wi0uxnv5443j6sogck3budc2pbypjmobottkouni105el6y0z6pdum1jqp2cpj3mk5lzrr6musqyrlw05yg4dwmd2jh8ns6nsgom8vld0gqktokjpq1440p3xh1svpna9uru9cj594fqaygwtw8ou3bbyo30vebyra95ijzfureuydxvb19pub26ctg9vhxitkk2nerofoglo0du0hgoce6z1m6tybtfy1hzmdp3ye6lmchvh4s2thcliq6hd1r6112x9gvh7917u2v0j690c5oxdytij95vryf1vybuqhn695ueowqfhdwmbvxxyltz8j0ukrrlnj8xnqfs0ud1wjv7gy7eq0t3wxrqtkaua5nsth6rb0h89wrjdz1fpoifvvhzhahipay625jyu6uj20mh4wqs1twuoa2racs9tyfvow0e2r0ds29yiwpm8x9ay45mva4136knenbtm8gsm0pmzx7h4yvwv3ygth28udln9iiq4623iysyy2fuc == \k\p\8\w\i\0\u\x\n\v\5\4\4\3\j\6\s\o\g\c\k\3\b\u\d\c\2\p\b\y\p\j\m\o\b\o\t\t\k\o\u\n\i\1\0\5\e\l\6\y\0\z\6\p\d\u\m\1\j\q\p\2\c\p\j\3\m\k\5\l\z\r\r\6\m\u\s\q\y\r\l\w\0\5\y\g\4\d\w\m\d\2\j\h\8\n\s\6\n\s\g\o\m\8\v\l\d\0\g\q\k\t\o\k\j\p\q\1\4\4\0\p\3\x\h\1\s\v\p\n\a\9\u\r\u\9\c\j\5\9\4\f\q\a\y\g\w\t\w\8\o\u\3\b\b\y\o\3\0\v\e\b\y\r\a\9\5\i\j\z\f\u\r\e\u\y\d\x\v\b\1\9\p\u\b\2\6\c\t\g\9\v\h\x\i\t\k\k\2\n\e\r\o\f\o\g\l\o\0\d\u\0\h\g\o\c\e\6\z\1\m\6\t\y\b\t\f\y\1\h\z\m\d\p\3\y\e\6\l\m\c\h\v\h\4\s\2\t\h\c\l\i\q\6\h\d\1\r\6\1\1\2\x\9\g\v\h\7\9\1\7\u\2\v\0\j\6\9\0\c\5\o\x\d\y\t\i\j\9\5\v\r\y\f\1\v\y\b\u\q\h\n\6\9\5\u\e\o\w\q\f\h\d\w\m\b\v\x\x\y\l\t\z\8\j\0\u\k\r\r\l\n\j\8\x\n\q\f\s\0\u\d\1\w\j\v\7\g\y\7\e\q\0\t\3\w\x\r\q\t\k\a\u\a\5\n\s\t\h\6\r\b\0\h\8\9\w\r\j\d\z\1\f\p\o\i\f\v\v\h\z\h\a\h\i\p\a\y\6\2\5\j\y\u\6\u\j\2\0\m\h\4\w\q\s\1\t\w\u\o\a\2\r\a\c\s\9\t\y\f\v\o\w\0\e\2\r\0\d\s\2\9\y\i\w\p\m\8\x\9\a\y\4\5\m\v\a\4\1\3\6\k\n\e\n\b\t\m\8\g\s\m\0\p\m\z\x\7\h\4\y\v\w\v\3\y\g\t\h\2\8\u\d\l\n\9\i\i\q\4\6\2\3\i\y\s\y\y\2\f\u\c ]] 00:06:10.703 13:45:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:10.704 13:45:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:10.704 [2024-12-06 13:45:10.096603] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:06:10.704 [2024-12-06 13:45:10.096705] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60574 ] 00:06:10.962 [2024-12-06 13:45:10.239945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.962 [2024-12-06 13:45:10.283755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.962 [2024-12-06 13:45:10.350376] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:11.221  [2024-12-06T13:45:10.883Z] Copying: 512/512 [B] (average 250 kBps) 00:06:11.479 00:06:11.479 13:45:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ kp8wi0uxnv5443j6sogck3budc2pbypjmobottkouni105el6y0z6pdum1jqp2cpj3mk5lzrr6musqyrlw05yg4dwmd2jh8ns6nsgom8vld0gqktokjpq1440p3xh1svpna9uru9cj594fqaygwtw8ou3bbyo30vebyra95ijzfureuydxvb19pub26ctg9vhxitkk2nerofoglo0du0hgoce6z1m6tybtfy1hzmdp3ye6lmchvh4s2thcliq6hd1r6112x9gvh7917u2v0j690c5oxdytij95vryf1vybuqhn695ueowqfhdwmbvxxyltz8j0ukrrlnj8xnqfs0ud1wjv7gy7eq0t3wxrqtkaua5nsth6rb0h89wrjdz1fpoifvvhzhahipay625jyu6uj20mh4wqs1twuoa2racs9tyfvow0e2r0ds29yiwpm8x9ay45mva4136knenbtm8gsm0pmzx7h4yvwv3ygth28udln9iiq4623iysyy2fuc == \k\p\8\w\i\0\u\x\n\v\5\4\4\3\j\6\s\o\g\c\k\3\b\u\d\c\2\p\b\y\p\j\m\o\b\o\t\t\k\o\u\n\i\1\0\5\e\l\6\y\0\z\6\p\d\u\m\1\j\q\p\2\c\p\j\3\m\k\5\l\z\r\r\6\m\u\s\q\y\r\l\w\0\5\y\g\4\d\w\m\d\2\j\h\8\n\s\6\n\s\g\o\m\8\v\l\d\0\g\q\k\t\o\k\j\p\q\1\4\4\0\p\3\x\h\1\s\v\p\n\a\9\u\r\u\9\c\j\5\9\4\f\q\a\y\g\w\t\w\8\o\u\3\b\b\y\o\3\0\v\e\b\y\r\a\9\5\i\j\z\f\u\r\e\u\y\d\x\v\b\1\9\p\u\b\2\6\c\t\g\9\v\h\x\i\t\k\k\2\n\e\r\o\f\o\g\l\o\0\d\u\0\h\g\o\c\e\6\z\1\m\6\t\y\b\t\f\y\1\h\z\m\d\p\3\y\e\6\l\m\c\h\v\h\4\s\2\t\h\c\l\i\q\6\h\d\1\r\6\1\1\2\x\9\g\v\h\7\9\1\7\u\2\v\0\j\6\9\0\c\5\o\x\d\y\t\i\j\9\5\v\r\y\f\1\v\y\b\u\q\h\n\6\9\5\u\e\o\w\q\f\h\d\w\m\b\v\x\x\y\l\t\z\8\j\0\u\k\r\r\l\n\j\8\x\n\q\f\s\0\u\d\1\w\j\v\7\g\y\7\e\q\0\t\3\w\x\r\q\t\k\a\u\a\5\n\s\t\h\6\r\b\0\h\8\9\w\r\j\d\z\1\f\p\o\i\f\v\v\h\z\h\a\h\i\p\a\y\6\2\5\j\y\u\6\u\j\2\0\m\h\4\w\q\s\1\t\w\u\o\a\2\r\a\c\s\9\t\y\f\v\o\w\0\e\2\r\0\d\s\2\9\y\i\w\p\m\8\x\9\a\y\4\5\m\v\a\4\1\3\6\k\n\e\n\b\t\m\8\g\s\m\0\p\m\z\x\7\h\4\y\v\w\v\3\y\g\t\h\2\8\u\d\l\n\9\i\i\q\4\6\2\3\i\y\s\y\y\2\f\u\c ]] 00:06:11.479 13:45:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:11.479 13:45:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:11.479 [2024-12-06 13:45:10.691838] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:06:11.480 [2024-12-06 13:45:10.691923] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60581 ] 00:06:11.480 [2024-12-06 13:45:10.836140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.480 [2024-12-06 13:45:10.875709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.738 [2024-12-06 13:45:10.944128] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:11.738  [2024-12-06T13:45:11.401Z] Copying: 512/512 [B] (average 250 kBps) 00:06:11.997 00:06:11.997 13:45:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ kp8wi0uxnv5443j6sogck3budc2pbypjmobottkouni105el6y0z6pdum1jqp2cpj3mk5lzrr6musqyrlw05yg4dwmd2jh8ns6nsgom8vld0gqktokjpq1440p3xh1svpna9uru9cj594fqaygwtw8ou3bbyo30vebyra95ijzfureuydxvb19pub26ctg9vhxitkk2nerofoglo0du0hgoce6z1m6tybtfy1hzmdp3ye6lmchvh4s2thcliq6hd1r6112x9gvh7917u2v0j690c5oxdytij95vryf1vybuqhn695ueowqfhdwmbvxxyltz8j0ukrrlnj8xnqfs0ud1wjv7gy7eq0t3wxrqtkaua5nsth6rb0h89wrjdz1fpoifvvhzhahipay625jyu6uj20mh4wqs1twuoa2racs9tyfvow0e2r0ds29yiwpm8x9ay45mva4136knenbtm8gsm0pmzx7h4yvwv3ygth28udln9iiq4623iysyy2fuc == \k\p\8\w\i\0\u\x\n\v\5\4\4\3\j\6\s\o\g\c\k\3\b\u\d\c\2\p\b\y\p\j\m\o\b\o\t\t\k\o\u\n\i\1\0\5\e\l\6\y\0\z\6\p\d\u\m\1\j\q\p\2\c\p\j\3\m\k\5\l\z\r\r\6\m\u\s\q\y\r\l\w\0\5\y\g\4\d\w\m\d\2\j\h\8\n\s\6\n\s\g\o\m\8\v\l\d\0\g\q\k\t\o\k\j\p\q\1\4\4\0\p\3\x\h\1\s\v\p\n\a\9\u\r\u\9\c\j\5\9\4\f\q\a\y\g\w\t\w\8\o\u\3\b\b\y\o\3\0\v\e\b\y\r\a\9\5\i\j\z\f\u\r\e\u\y\d\x\v\b\1\9\p\u\b\2\6\c\t\g\9\v\h\x\i\t\k\k\2\n\e\r\o\f\o\g\l\o\0\d\u\0\h\g\o\c\e\6\z\1\m\6\t\y\b\t\f\y\1\h\z\m\d\p\3\y\e\6\l\m\c\h\v\h\4\s\2\t\h\c\l\i\q\6\h\d\1\r\6\1\1\2\x\9\g\v\h\7\9\1\7\u\2\v\0\j\6\9\0\c\5\o\x\d\y\t\i\j\9\5\v\r\y\f\1\v\y\b\u\q\h\n\6\9\5\u\e\o\w\q\f\h\d\w\m\b\v\x\x\y\l\t\z\8\j\0\u\k\r\r\l\n\j\8\x\n\q\f\s\0\u\d\1\w\j\v\7\g\y\7\e\q\0\t\3\w\x\r\q\t\k\a\u\a\5\n\s\t\h\6\r\b\0\h\8\9\w\r\j\d\z\1\f\p\o\i\f\v\v\h\z\h\a\h\i\p\a\y\6\2\5\j\y\u\6\u\j\2\0\m\h\4\w\q\s\1\t\w\u\o\a\2\r\a\c\s\9\t\y\f\v\o\w\0\e\2\r\0\d\s\2\9\y\i\w\p\m\8\x\9\a\y\4\5\m\v\a\4\1\3\6\k\n\e\n\b\t\m\8\g\s\m\0\p\m\z\x\7\h\4\y\v\w\v\3\y\g\t\h\2\8\u\d\l\n\9\i\i\q\4\6\2\3\i\y\s\y\y\2\f\u\c ]] 00:06:11.997 00:06:11.997 real 0m4.805s 00:06:11.997 user 0m2.601s 00:06:11.997 sys 0m1.238s 00:06:11.997 13:45:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.997 ************************************ 00:06:11.997 END TEST dd_flags_misc_forced_aio 00:06:11.997 13:45:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:11.997 ************************************ 00:06:11.997 13:45:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:06:11.997 13:45:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:11.997 13:45:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:11.997 ************************************ 00:06:11.997 END TEST spdk_dd_posix 00:06:11.997 ************************************ 00:06:11.997 00:06:11.997 real 0m21.793s 00:06:11.997 user 0m10.623s 00:06:11.997 sys 0m7.626s 00:06:11.997 13:45:11 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.997 13:45:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:11.997 13:45:11 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:11.997 13:45:11 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.997 13:45:11 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.997 13:45:11 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:11.997 ************************************ 00:06:11.997 START TEST spdk_dd_malloc 00:06:11.997 ************************************ 00:06:11.997 13:45:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:12.256 * Looking for test storage... 00:06:12.256 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:12.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.256 --rc genhtml_branch_coverage=1 00:06:12.256 --rc genhtml_function_coverage=1 00:06:12.256 --rc genhtml_legend=1 00:06:12.256 --rc geninfo_all_blocks=1 00:06:12.256 --rc geninfo_unexecuted_blocks=1 00:06:12.256 00:06:12.256 ' 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:12.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.256 --rc genhtml_branch_coverage=1 00:06:12.256 --rc genhtml_function_coverage=1 00:06:12.256 --rc genhtml_legend=1 00:06:12.256 --rc geninfo_all_blocks=1 00:06:12.256 --rc geninfo_unexecuted_blocks=1 00:06:12.256 00:06:12.256 ' 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:12.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.256 --rc genhtml_branch_coverage=1 00:06:12.256 --rc genhtml_function_coverage=1 00:06:12.256 --rc genhtml_legend=1 00:06:12.256 --rc geninfo_all_blocks=1 00:06:12.256 --rc geninfo_unexecuted_blocks=1 00:06:12.256 00:06:12.256 ' 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:12.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.256 --rc genhtml_branch_coverage=1 00:06:12.256 --rc genhtml_function_coverage=1 00:06:12.256 --rc genhtml_legend=1 00:06:12.256 --rc geninfo_all_blocks=1 00:06:12.256 --rc geninfo_unexecuted_blocks=1 00:06:12.256 00:06:12.256 ' 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:12.256 13:45:11 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:12.256 ************************************ 00:06:12.256 START TEST dd_malloc_copy 00:06:12.256 ************************************ 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:12.256 13:45:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:12.256 [2024-12-06 13:45:11.583741] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:06:12.256 [2024-12-06 13:45:11.584480] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60663 ] 00:06:12.256 { 00:06:12.256 "subsystems": [ 00:06:12.256 { 00:06:12.256 "subsystem": "bdev", 00:06:12.256 "config": [ 00:06:12.256 { 00:06:12.256 "params": { 00:06:12.256 "block_size": 512, 00:06:12.256 "num_blocks": 1048576, 00:06:12.256 "name": "malloc0" 00:06:12.256 }, 00:06:12.256 "method": "bdev_malloc_create" 00:06:12.256 }, 00:06:12.256 { 00:06:12.256 "params": { 00:06:12.256 "block_size": 512, 00:06:12.256 "num_blocks": 1048576, 00:06:12.256 "name": "malloc1" 00:06:12.256 }, 00:06:12.256 "method": "bdev_malloc_create" 00:06:12.256 }, 00:06:12.256 { 00:06:12.256 "method": "bdev_wait_for_examine" 00:06:12.256 } 00:06:12.256 ] 00:06:12.256 } 00:06:12.256 ] 00:06:12.256 } 00:06:12.516 [2024-12-06 13:45:11.728628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.516 [2024-12-06 13:45:11.772890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.516 [2024-12-06 13:45:11.840836] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:13.904  [2024-12-06T13:45:14.688Z] Copying: 242/512 [MB] (242 MBps) [2024-12-06T13:45:14.688Z] Copying: 483/512 [MB] (241 MBps) [2024-12-06T13:45:15.256Z] Copying: 512/512 [MB] (average 241 MBps) 00:06:15.852 00:06:15.852 13:45:15 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:06:15.852 13:45:15 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:06:15.852 13:45:15 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:15.852 13:45:15 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:15.852 [2024-12-06 13:45:15.182712] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:06:15.852 { 00:06:15.852 "subsystems": [ 00:06:15.852 { 00:06:15.852 "subsystem": "bdev", 00:06:15.852 "config": [ 00:06:15.852 { 00:06:15.852 "params": { 00:06:15.852 "block_size": 512, 00:06:15.852 "num_blocks": 1048576, 00:06:15.852 "name": "malloc0" 00:06:15.852 }, 00:06:15.852 "method": "bdev_malloc_create" 00:06:15.852 }, 00:06:15.852 { 00:06:15.852 "params": { 00:06:15.852 "block_size": 512, 00:06:15.853 "num_blocks": 1048576, 00:06:15.853 "name": "malloc1" 00:06:15.853 }, 00:06:15.853 "method": "bdev_malloc_create" 00:06:15.853 }, 00:06:15.853 { 00:06:15.853 "method": "bdev_wait_for_examine" 00:06:15.853 } 00:06:15.853 ] 00:06:15.853 } 00:06:15.853 ] 00:06:15.853 } 00:06:15.853 [2024-12-06 13:45:15.183491] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60713 ] 00:06:16.111 [2024-12-06 13:45:15.327068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.111 [2024-12-06 13:45:15.367665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.111 [2024-12-06 13:45:15.434545] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:17.488  [2024-12-06T13:45:18.270Z] Copying: 244/512 [MB] (244 MBps) [2024-12-06T13:45:18.270Z] Copying: 488/512 [MB] (244 MBps) [2024-12-06T13:45:18.838Z] Copying: 512/512 [MB] (average 244 MBps) 00:06:19.434 00:06:19.434 ************************************ 00:06:19.434 END TEST dd_malloc_copy 00:06:19.434 ************************************ 00:06:19.434 00:06:19.434 real 0m7.166s 00:06:19.434 user 0m6.004s 00:06:19.434 sys 0m1.004s 00:06:19.434 13:45:18 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.434 13:45:18 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:19.434 ************************************ 00:06:19.434 END TEST spdk_dd_malloc 00:06:19.434 ************************************ 00:06:19.434 00:06:19.434 real 0m7.394s 00:06:19.434 user 0m6.119s 00:06:19.434 sys 0m1.114s 00:06:19.434 13:45:18 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.434 13:45:18 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:19.434 13:45:18 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:19.434 13:45:18 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:19.434 13:45:18 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.434 13:45:18 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:19.434 ************************************ 00:06:19.434 START TEST spdk_dd_bdev_to_bdev 00:06:19.434 ************************************ 00:06:19.434 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:19.694 * Looking for test storage... 
00:06:19.694 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # lcov --version 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:19.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.694 --rc genhtml_branch_coverage=1 00:06:19.694 --rc genhtml_function_coverage=1 00:06:19.694 --rc genhtml_legend=1 00:06:19.694 --rc geninfo_all_blocks=1 00:06:19.694 --rc geninfo_unexecuted_blocks=1 00:06:19.694 00:06:19.694 ' 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:19.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.694 --rc genhtml_branch_coverage=1 00:06:19.694 --rc genhtml_function_coverage=1 00:06:19.694 --rc genhtml_legend=1 00:06:19.694 --rc geninfo_all_blocks=1 00:06:19.694 --rc geninfo_unexecuted_blocks=1 00:06:19.694 00:06:19.694 ' 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:19.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.694 --rc genhtml_branch_coverage=1 00:06:19.694 --rc genhtml_function_coverage=1 00:06:19.694 --rc genhtml_legend=1 00:06:19.694 --rc geninfo_all_blocks=1 00:06:19.694 --rc geninfo_unexecuted_blocks=1 00:06:19.694 00:06:19.694 ' 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:19.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.694 --rc genhtml_branch_coverage=1 00:06:19.694 --rc genhtml_function_coverage=1 00:06:19.694 --rc genhtml_legend=1 00:06:19.694 --rc geninfo_all_blocks=1 00:06:19.694 --rc geninfo_unexecuted_blocks=1 00:06:19.694 00:06:19.694 ' 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:19.694 13:45:18 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.694 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:06:19.695 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:06:19.695 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:06:19.695 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:06:19.695 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:06:19.695 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:06:19.695 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:06:19.695 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:06:19.695 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:06:19.695 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:06:19.695 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:19.695 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:19.695 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:06:19.695 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:06:19.695 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:19.695 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:19.695 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:06:19.695 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:06:19.695 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:19.695 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:19.695 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.695 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:19.695 ************************************ 00:06:19.695 START TEST dd_inflate_file 00:06:19.695 ************************************ 00:06:19.695 13:45:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:19.695 [2024-12-06 13:45:19.044480] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:06:19.695 [2024-12-06 13:45:19.044713] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60825 ] 00:06:19.954 [2024-12-06 13:45:19.188606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.954 [2024-12-06 13:45:19.236767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.954 [2024-12-06 13:45:19.303206] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:20.214  [2024-12-06T13:45:19.618Z] Copying: 64/64 [MB] (average 1361 MBps) 00:06:20.214 00:06:20.473 00:06:20.473 real 0m0.626s 00:06:20.473 user 0m0.351s 00:06:20.473 sys 0m0.363s 00:06:20.473 13:45:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.473 13:45:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:06:20.473 ************************************ 00:06:20.473 END TEST dd_inflate_file 00:06:20.473 ************************************ 00:06:20.473 13:45:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:06:20.473 13:45:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:06:20.473 13:45:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:20.473 13:45:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:06:20.473 13:45:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:20.473 13:45:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:20.473 13:45:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:20.473 13:45:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.473 13:45:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:20.473 ************************************ 00:06:20.473 START TEST dd_copy_to_out_bdev 00:06:20.473 ************************************ 00:06:20.473 13:45:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:20.473 [2024-12-06 13:45:19.729823] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:06:20.473 { 00:06:20.473 "subsystems": [ 00:06:20.473 { 00:06:20.473 "subsystem": "bdev", 00:06:20.473 "config": [ 00:06:20.473 { 00:06:20.473 "params": { 00:06:20.473 "trtype": "pcie", 00:06:20.473 "traddr": "0000:00:10.0", 00:06:20.473 "name": "Nvme0" 00:06:20.473 }, 00:06:20.473 "method": "bdev_nvme_attach_controller" 00:06:20.473 }, 00:06:20.473 { 00:06:20.473 "params": { 00:06:20.473 "trtype": "pcie", 00:06:20.473 "traddr": "0000:00:11.0", 00:06:20.473 "name": "Nvme1" 00:06:20.473 }, 00:06:20.473 "method": "bdev_nvme_attach_controller" 00:06:20.473 }, 00:06:20.473 { 00:06:20.473 "method": "bdev_wait_for_examine" 00:06:20.473 } 00:06:20.473 ] 00:06:20.473 } 00:06:20.473 ] 00:06:20.473 } 00:06:20.473 [2024-12-06 13:45:19.730237] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60860 ] 00:06:20.473 [2024-12-06 13:45:19.873165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.733 [2024-12-06 13:45:19.915468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.733 [2024-12-06 13:45:19.983604] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:22.112  [2024-12-06T13:45:21.516Z] Copying: 51/64 [MB] (51 MBps) [2024-12-06T13:45:21.776Z] Copying: 64/64 [MB] (average 50 MBps) 00:06:22.372 00:06:22.372 00:06:22.372 real 0m2.023s 00:06:22.372 user 0m1.779s 00:06:22.372 sys 0m1.679s 00:06:22.372 13:45:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.372 13:45:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:22.372 ************************************ 00:06:22.372 END TEST dd_copy_to_out_bdev 00:06:22.372 ************************************ 00:06:22.372 13:45:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:06:22.372 13:45:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:06:22.372 13:45:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.372 13:45:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.372 13:45:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:22.372 ************************************ 00:06:22.372 START TEST dd_offset_magic 00:06:22.372 ************************************ 00:06:22.372 13:45:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:06:22.372 13:45:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:06:22.372 13:45:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:06:22.372 13:45:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:06:22.372 13:45:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:22.372 13:45:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:06:22.372 13:45:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 
00:06:22.372 13:45:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:22.372 13:45:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:22.631 [2024-12-06 13:45:21.811479] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:06:22.631 [2024-12-06 13:45:21.811575] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60904 ] 00:06:22.631 { 00:06:22.631 "subsystems": [ 00:06:22.631 { 00:06:22.631 "subsystem": "bdev", 00:06:22.631 "config": [ 00:06:22.631 { 00:06:22.631 "params": { 00:06:22.631 "trtype": "pcie", 00:06:22.631 "traddr": "0000:00:10.0", 00:06:22.631 "name": "Nvme0" 00:06:22.631 }, 00:06:22.631 "method": "bdev_nvme_attach_controller" 00:06:22.631 }, 00:06:22.631 { 00:06:22.631 "params": { 00:06:22.631 "trtype": "pcie", 00:06:22.631 "traddr": "0000:00:11.0", 00:06:22.631 "name": "Nvme1" 00:06:22.631 }, 00:06:22.631 "method": "bdev_nvme_attach_controller" 00:06:22.631 }, 00:06:22.631 { 00:06:22.631 "method": "bdev_wait_for_examine" 00:06:22.631 } 00:06:22.631 ] 00:06:22.631 } 00:06:22.631 ] 00:06:22.631 } 00:06:22.631 [2024-12-06 13:45:21.952680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.631 [2024-12-06 13:45:21.996297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.891 [2024-12-06 13:45:22.064195] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:23.150  [2024-12-06T13:45:22.814Z] Copying: 65/65 [MB] (average 833 MBps) 00:06:23.410 00:06:23.410 13:45:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:06:23.410 13:45:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:23.410 13:45:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:23.410 13:45:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:23.410 [2024-12-06 13:45:22.656666] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:06:23.410 [2024-12-06 13:45:22.657233] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60924 ] 00:06:23.410 { 00:06:23.410 "subsystems": [ 00:06:23.410 { 00:06:23.410 "subsystem": "bdev", 00:06:23.410 "config": [ 00:06:23.410 { 00:06:23.410 "params": { 00:06:23.410 "trtype": "pcie", 00:06:23.410 "traddr": "0000:00:10.0", 00:06:23.410 "name": "Nvme0" 00:06:23.410 }, 00:06:23.410 "method": "bdev_nvme_attach_controller" 00:06:23.410 }, 00:06:23.410 { 00:06:23.410 "params": { 00:06:23.410 "trtype": "pcie", 00:06:23.410 "traddr": "0000:00:11.0", 00:06:23.410 "name": "Nvme1" 00:06:23.410 }, 00:06:23.410 "method": "bdev_nvme_attach_controller" 00:06:23.410 }, 00:06:23.410 { 00:06:23.410 "method": "bdev_wait_for_examine" 00:06:23.410 } 00:06:23.410 ] 00:06:23.410 } 00:06:23.410 ] 00:06:23.410 } 00:06:23.410 [2024-12-06 13:45:22.800814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.669 [2024-12-06 13:45:22.844430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.669 [2024-12-06 13:45:22.921466] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:23.928  [2024-12-06T13:45:23.332Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:23.928 00:06:24.188 13:45:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:24.188 13:45:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:24.188 13:45:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:24.188 13:45:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:06:24.188 13:45:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:24.188 13:45:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:24.188 13:45:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:24.188 [2024-12-06 13:45:23.390815] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:06:24.188 [2024-12-06 13:45:23.391165] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60947 ] 00:06:24.188 { 00:06:24.188 "subsystems": [ 00:06:24.188 { 00:06:24.188 "subsystem": "bdev", 00:06:24.188 "config": [ 00:06:24.188 { 00:06:24.188 "params": { 00:06:24.188 "trtype": "pcie", 00:06:24.188 "traddr": "0000:00:10.0", 00:06:24.188 "name": "Nvme0" 00:06:24.188 }, 00:06:24.188 "method": "bdev_nvme_attach_controller" 00:06:24.188 }, 00:06:24.188 { 00:06:24.188 "params": { 00:06:24.188 "trtype": "pcie", 00:06:24.188 "traddr": "0000:00:11.0", 00:06:24.188 "name": "Nvme1" 00:06:24.188 }, 00:06:24.188 "method": "bdev_nvme_attach_controller" 00:06:24.188 }, 00:06:24.188 { 00:06:24.188 "method": "bdev_wait_for_examine" 00:06:24.188 } 00:06:24.188 ] 00:06:24.188 } 00:06:24.188 ] 00:06:24.188 } 00:06:24.188 [2024-12-06 13:45:23.535887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.188 [2024-12-06 13:45:23.577205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.459 [2024-12-06 13:45:23.646995] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:24.719  [2024-12-06T13:45:24.382Z] Copying: 65/65 [MB] (average 902 MBps) 00:06:24.978 00:06:24.978 13:45:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:06:24.978 13:45:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:24.978 13:45:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:24.978 13:45:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:24.978 [2024-12-06 13:45:24.233995] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:06:24.978 [2024-12-06 13:45:24.234294] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60961 ] 00:06:24.978 { 00:06:24.978 "subsystems": [ 00:06:24.978 { 00:06:24.978 "subsystem": "bdev", 00:06:24.978 "config": [ 00:06:24.978 { 00:06:24.978 "params": { 00:06:24.978 "trtype": "pcie", 00:06:24.978 "traddr": "0000:00:10.0", 00:06:24.978 "name": "Nvme0" 00:06:24.978 }, 00:06:24.978 "method": "bdev_nvme_attach_controller" 00:06:24.978 }, 00:06:24.978 { 00:06:24.978 "params": { 00:06:24.978 "trtype": "pcie", 00:06:24.978 "traddr": "0000:00:11.0", 00:06:24.978 "name": "Nvme1" 00:06:24.978 }, 00:06:24.978 "method": "bdev_nvme_attach_controller" 00:06:24.978 }, 00:06:24.978 { 00:06:24.978 "method": "bdev_wait_for_examine" 00:06:24.978 } 00:06:24.978 ] 00:06:24.978 } 00:06:24.978 ] 00:06:24.978 } 00:06:24.978 [2024-12-06 13:45:24.376629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.238 [2024-12-06 13:45:24.419363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.238 [2024-12-06 13:45:24.487537] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:25.497  [2024-12-06T13:45:24.901Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:25.497 00:06:25.756 ************************************ 00:06:25.756 END TEST dd_offset_magic 00:06:25.756 ************************************ 00:06:25.756 13:45:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:25.756 13:45:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:25.756 00:06:25.756 real 0m3.143s 00:06:25.756 user 0m2.226s 00:06:25.756 sys 0m1.061s 00:06:25.756 13:45:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.757 13:45:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:25.757 13:45:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:06:25.757 13:45:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:06:25.757 13:45:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:25.757 13:45:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:25.757 13:45:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:25.757 13:45:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:25.757 13:45:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:25.757 13:45:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:06:25.757 13:45:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:25.757 13:45:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:25.757 13:45:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:25.757 [2024-12-06 13:45:25.001771] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:06:25.757 [2024-12-06 13:45:25.001866] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60993 ] 00:06:25.757 { 00:06:25.757 "subsystems": [ 00:06:25.757 { 00:06:25.757 "subsystem": "bdev", 00:06:25.757 "config": [ 00:06:25.757 { 00:06:25.757 "params": { 00:06:25.757 "trtype": "pcie", 00:06:25.757 "traddr": "0000:00:10.0", 00:06:25.757 "name": "Nvme0" 00:06:25.757 }, 00:06:25.757 "method": "bdev_nvme_attach_controller" 00:06:25.757 }, 00:06:25.757 { 00:06:25.757 "params": { 00:06:25.757 "trtype": "pcie", 00:06:25.757 "traddr": "0000:00:11.0", 00:06:25.757 "name": "Nvme1" 00:06:25.757 }, 00:06:25.757 "method": "bdev_nvme_attach_controller" 00:06:25.757 }, 00:06:25.757 { 00:06:25.757 "method": "bdev_wait_for_examine" 00:06:25.757 } 00:06:25.757 ] 00:06:25.757 } 00:06:25.757 ] 00:06:25.757 } 00:06:25.757 [2024-12-06 13:45:25.145307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.018 [2024-12-06 13:45:25.192039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.018 [2024-12-06 13:45:25.259721] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:26.276  [2024-12-06T13:45:25.680Z] Copying: 5120/5120 [kB] (average 1000 MBps) 00:06:26.276 00:06:26.276 13:45:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:06:26.276 13:45:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:06:26.276 13:45:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:26.276 13:45:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:26.276 13:45:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:26.276 13:45:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:26.276 13:45:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:06:26.276 13:45:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:26.276 13:45:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:26.276 13:45:25 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:26.535 [2024-12-06 13:45:25.731484] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:06:26.535 [2024-12-06 13:45:25.731574] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61014 ] 00:06:26.535 { 00:06:26.535 "subsystems": [ 00:06:26.535 { 00:06:26.535 "subsystem": "bdev", 00:06:26.535 "config": [ 00:06:26.535 { 00:06:26.535 "params": { 00:06:26.535 "trtype": "pcie", 00:06:26.535 "traddr": "0000:00:10.0", 00:06:26.535 "name": "Nvme0" 00:06:26.535 }, 00:06:26.535 "method": "bdev_nvme_attach_controller" 00:06:26.535 }, 00:06:26.535 { 00:06:26.535 "params": { 00:06:26.535 "trtype": "pcie", 00:06:26.535 "traddr": "0000:00:11.0", 00:06:26.535 "name": "Nvme1" 00:06:26.535 }, 00:06:26.535 "method": "bdev_nvme_attach_controller" 00:06:26.535 }, 00:06:26.535 { 00:06:26.535 "method": "bdev_wait_for_examine" 00:06:26.535 } 00:06:26.535 ] 00:06:26.535 } 00:06:26.535 ] 00:06:26.535 } 00:06:26.535 [2024-12-06 13:45:25.876310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.535 [2024-12-06 13:45:25.920338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.793 [2024-12-06 13:45:25.989497] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:26.793  [2024-12-06T13:45:26.455Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:06:27.051 00:06:27.051 13:45:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:06:27.051 00:06:27.051 real 0m7.640s 00:06:27.051 user 0m5.559s 00:06:27.051 sys 0m3.892s 00:06:27.051 ************************************ 00:06:27.051 END TEST spdk_dd_bdev_to_bdev 00:06:27.051 ************************************ 00:06:27.051 13:45:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.051 13:45:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:27.309 13:45:26 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:06:27.309 13:45:26 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:27.309 13:45:26 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.309 13:45:26 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.309 13:45:26 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:27.309 ************************************ 00:06:27.309 START TEST spdk_dd_uring 00:06:27.309 ************************************ 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:27.309 * Looking for test storage... 
00:06:27.309 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # lcov --version 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:27.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.309 --rc genhtml_branch_coverage=1 00:06:27.309 --rc genhtml_function_coverage=1 00:06:27.309 --rc genhtml_legend=1 00:06:27.309 --rc geninfo_all_blocks=1 00:06:27.309 --rc geninfo_unexecuted_blocks=1 00:06:27.309 00:06:27.309 ' 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:27.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.309 --rc genhtml_branch_coverage=1 00:06:27.309 --rc genhtml_function_coverage=1 00:06:27.309 --rc genhtml_legend=1 00:06:27.309 --rc geninfo_all_blocks=1 00:06:27.309 --rc geninfo_unexecuted_blocks=1 00:06:27.309 00:06:27.309 ' 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:27.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.309 --rc genhtml_branch_coverage=1 00:06:27.309 --rc genhtml_function_coverage=1 00:06:27.309 --rc genhtml_legend=1 00:06:27.309 --rc geninfo_all_blocks=1 00:06:27.309 --rc geninfo_unexecuted_blocks=1 00:06:27.309 00:06:27.309 ' 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:27.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.309 --rc genhtml_branch_coverage=1 00:06:27.309 --rc genhtml_function_coverage=1 00:06:27.309 --rc genhtml_legend=1 00:06:27.309 --rc geninfo_all_blocks=1 00:06:27.309 --rc geninfo_unexecuted_blocks=1 00:06:27.309 00:06:27.309 ' 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:27.309 13:45:26 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.310 13:45:26 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.310 13:45:26 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.310 13:45:26 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:06:27.310 13:45:26 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.310 13:45:26 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:06:27.310 13:45:26 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.310 13:45:26 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.310 13:45:26 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:06:27.310 ************************************ 00:06:27.310 START TEST dd_uring_copy 00:06:27.310 ************************************ 00:06:27.310 13:45:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:06:27.310 13:45:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:06:27.310 13:45:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:06:27.310 13:45:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:06:27.310 13:45:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:27.310 
13:45:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:06:27.310 13:45:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:06:27.310 13:45:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:06:27.310 13:45:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:06:27.310 13:45:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:06:27.310 13:45:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:06:27.310 13:45:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:06:27.310 13:45:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:06:27.310 13:45:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:06:27.310 13:45:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:06:27.310 13:45:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:06:27.310 13:45:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:06:27.310 13:45:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:06:27.310 13:45:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:06:27.310 13:45:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:06:27.310 13:45:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:06:27.310 13:45:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:27.310 13:45:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:06:27.310 13:45:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:06:27.310 13:45:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:06:27.310 13:45:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:27.570 13:45:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=pj71agkqrqhh3jvedciucy35hv9z56dn0y0w3d2a62zsfgn3iadu8kkln9nawth9lqm8fdltmgw81f1gh6o48oudyv66o779mx1flr3imdg8l1fbaufhp43hsazx32koe6dui24dc6vwzxy6l7xuin3kqji9egvm0ex027n5vw6wikcbkh0h9q5ry3cfpgmtatui6sr9pp8bpukhcnik5igqds4zs7bncv3piar79yhzjdoxdn8o435ctup6icj1t2nlyrt8damr70iqcmyeio3m7lcf9ae9l3mw7gd4srn8mu6ler0ptkhufob2sglgz6yhukaa7msri7nmdplduko84psqn73agku7gf2xopjreocbnox4h5kz8bru3ck67xdlqcdprvvy8mb80b8z9qugx0d3k013nogv6sxyw8xwykkzp38lg4kc5wclthod33rhe8r4e5wffb7jynux1ihis4u6z6xpccg97q93vsvnkg1viebh1qzwklt1d1mk94m05y9ytjwo7pt8d3czbnqcs6sx88qxumyxx6i6ysevf78ojmvpxx76veeghmfyrlre703j8pqbbwns4yfiphzng6z9r3yon3bfactvzk9o2b6pbwm9byss1ay0r5l5c037ol9lzhkf2wsbfkbs4aybh3l20fi2xef36e2c3od9ucnhiv9kouxv2csss5qg85lvbnufj8j89nwcmk3i85rky7ys1ncioxf2bo7ufwbv0tjiyuzi2ngmmwln0cv96t6ezi2dp480ka5dhppyfr6ddcjmh3359xi8l9y65874qiuewnowzt208tkv3ttchv12mtfv740fjy1zlh6th3kv4pwv9yyi5wxhr8jsqqp71y2eez1igic7q63hlm134z9ho0clld94imwjr7hmxgcljfo4f4ckr6zick25pv1ba0zwlr259uzz3s3xsuqogjwgxwwoir9v7g8a09kc6t8er5ncmolgyfcdxphcb6wyt9vd2g82uxxd5hcd4jpg 00:06:27.570 13:45:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
pj71agkqrqhh3jvedciucy35hv9z56dn0y0w3d2a62zsfgn3iadu8kkln9nawth9lqm8fdltmgw81f1gh6o48oudyv66o779mx1flr3imdg8l1fbaufhp43hsazx32koe6dui24dc6vwzxy6l7xuin3kqji9egvm0ex027n5vw6wikcbkh0h9q5ry3cfpgmtatui6sr9pp8bpukhcnik5igqds4zs7bncv3piar79yhzjdoxdn8o435ctup6icj1t2nlyrt8damr70iqcmyeio3m7lcf9ae9l3mw7gd4srn8mu6ler0ptkhufob2sglgz6yhukaa7msri7nmdplduko84psqn73agku7gf2xopjreocbnox4h5kz8bru3ck67xdlqcdprvvy8mb80b8z9qugx0d3k013nogv6sxyw8xwykkzp38lg4kc5wclthod33rhe8r4e5wffb7jynux1ihis4u6z6xpccg97q93vsvnkg1viebh1qzwklt1d1mk94m05y9ytjwo7pt8d3czbnqcs6sx88qxumyxx6i6ysevf78ojmvpxx76veeghmfyrlre703j8pqbbwns4yfiphzng6z9r3yon3bfactvzk9o2b6pbwm9byss1ay0r5l5c037ol9lzhkf2wsbfkbs4aybh3l20fi2xef36e2c3od9ucnhiv9kouxv2csss5qg85lvbnufj8j89nwcmk3i85rky7ys1ncioxf2bo7ufwbv0tjiyuzi2ngmmwln0cv96t6ezi2dp480ka5dhppyfr6ddcjmh3359xi8l9y65874qiuewnowzt208tkv3ttchv12mtfv740fjy1zlh6th3kv4pwv9yyi5wxhr8jsqqp71y2eez1igic7q63hlm134z9ho0clld94imwjr7hmxgcljfo4f4ckr6zick25pv1ba0zwlr259uzz3s3xsuqogjwgxwwoir9v7g8a09kc6t8er5ncmolgyfcdxphcb6wyt9vd2g82uxxd5hcd4jpg 00:06:27.570 13:45:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:06:27.570 [2024-12-06 13:45:26.772713] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:06:27.570 [2024-12-06 13:45:26.772982] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61092 ] 00:06:27.570 [2024-12-06 13:45:26.917025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.570 [2024-12-06 13:45:26.966478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.829 [2024-12-06 13:45:27.035229] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:28.397  [2024-12-06T13:45:28.370Z] Copying: 511/511 [MB] (average 1011 MBps) 00:06:28.966 00:06:28.966 13:45:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:06:28.966 13:45:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:06:28.966 13:45:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:28.966 13:45:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:28.966 [2024-12-06 13:45:28.353423] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:06:28.966 [2024-12-06 13:45:28.353521] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61113 ] 00:06:28.966 { 00:06:28.966 "subsystems": [ 00:06:28.966 { 00:06:28.966 "subsystem": "bdev", 00:06:28.966 "config": [ 00:06:28.966 { 00:06:28.966 "params": { 00:06:28.966 "block_size": 512, 00:06:28.966 "num_blocks": 1048576, 00:06:28.966 "name": "malloc0" 00:06:28.966 }, 00:06:28.966 "method": "bdev_malloc_create" 00:06:28.966 }, 00:06:28.966 { 00:06:28.966 "params": { 00:06:28.966 "filename": "/dev/zram1", 00:06:28.966 "name": "uring0" 00:06:28.966 }, 00:06:28.966 "method": "bdev_uring_create" 00:06:28.966 }, 00:06:28.966 { 00:06:28.966 "method": "bdev_wait_for_examine" 00:06:28.966 } 00:06:28.966 ] 00:06:28.966 } 00:06:28.966 ] 00:06:28.966 } 00:06:29.224 [2024-12-06 13:45:28.496717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.224 [2024-12-06 13:45:28.537764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.224 [2024-12-06 13:45:28.604769] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:30.601  [2024-12-06T13:45:30.938Z] Copying: 270/512 [MB] (270 MBps) [2024-12-06T13:45:31.505Z] Copying: 512/512 [MB] (average 271 MBps) 00:06:32.101 00:06:32.101 13:45:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:06:32.101 13:45:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:06:32.101 13:45:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:32.101 13:45:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:32.101 { 00:06:32.101 "subsystems": [ 00:06:32.101 { 00:06:32.101 "subsystem": "bdev", 00:06:32.101 "config": [ 00:06:32.101 { 00:06:32.101 "params": { 00:06:32.101 "block_size": 512, 00:06:32.101 "num_blocks": 1048576, 00:06:32.101 "name": "malloc0" 00:06:32.101 }, 00:06:32.101 "method": "bdev_malloc_create" 00:06:32.101 }, 00:06:32.101 { 00:06:32.101 "params": { 00:06:32.101 "filename": "/dev/zram1", 00:06:32.101 "name": "uring0" 00:06:32.101 }, 00:06:32.101 "method": "bdev_uring_create" 00:06:32.101 }, 00:06:32.101 { 00:06:32.101 "method": "bdev_wait_for_examine" 00:06:32.101 } 00:06:32.101 ] 00:06:32.101 } 00:06:32.101 ] 00:06:32.101 } 00:06:32.101 [2024-12-06 13:45:31.294755] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:06:32.101 [2024-12-06 13:45:31.294857] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61152 ] 00:06:32.101 [2024-12-06 13:45:31.439924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.101 [2024-12-06 13:45:31.495542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.359 [2024-12-06 13:45:31.566030] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:33.735  [2024-12-06T13:45:34.076Z] Copying: 163/512 [MB] (163 MBps) [2024-12-06T13:45:35.014Z] Copying: 318/512 [MB] (155 MBps) [2024-12-06T13:45:35.014Z] Copying: 477/512 [MB] (158 MBps) [2024-12-06T13:45:35.590Z] Copying: 512/512 [MB] (average 160 MBps) 00:06:36.186 00:06:36.186 13:45:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:06:36.186 13:45:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ pj71agkqrqhh3jvedciucy35hv9z56dn0y0w3d2a62zsfgn3iadu8kkln9nawth9lqm8fdltmgw81f1gh6o48oudyv66o779mx1flr3imdg8l1fbaufhp43hsazx32koe6dui24dc6vwzxy6l7xuin3kqji9egvm0ex027n5vw6wikcbkh0h9q5ry3cfpgmtatui6sr9pp8bpukhcnik5igqds4zs7bncv3piar79yhzjdoxdn8o435ctup6icj1t2nlyrt8damr70iqcmyeio3m7lcf9ae9l3mw7gd4srn8mu6ler0ptkhufob2sglgz6yhukaa7msri7nmdplduko84psqn73agku7gf2xopjreocbnox4h5kz8bru3ck67xdlqcdprvvy8mb80b8z9qugx0d3k013nogv6sxyw8xwykkzp38lg4kc5wclthod33rhe8r4e5wffb7jynux1ihis4u6z6xpccg97q93vsvnkg1viebh1qzwklt1d1mk94m05y9ytjwo7pt8d3czbnqcs6sx88qxumyxx6i6ysevf78ojmvpxx76veeghmfyrlre703j8pqbbwns4yfiphzng6z9r3yon3bfactvzk9o2b6pbwm9byss1ay0r5l5c037ol9lzhkf2wsbfkbs4aybh3l20fi2xef36e2c3od9ucnhiv9kouxv2csss5qg85lvbnufj8j89nwcmk3i85rky7ys1ncioxf2bo7ufwbv0tjiyuzi2ngmmwln0cv96t6ezi2dp480ka5dhppyfr6ddcjmh3359xi8l9y65874qiuewnowzt208tkv3ttchv12mtfv740fjy1zlh6th3kv4pwv9yyi5wxhr8jsqqp71y2eez1igic7q63hlm134z9ho0clld94imwjr7hmxgcljfo4f4ckr6zick25pv1ba0zwlr259uzz3s3xsuqogjwgxwwoir9v7g8a09kc6t8er5ncmolgyfcdxphcb6wyt9vd2g82uxxd5hcd4jpg == 
\p\j\7\1\a\g\k\q\r\q\h\h\3\j\v\e\d\c\i\u\c\y\3\5\h\v\9\z\5\6\d\n\0\y\0\w\3\d\2\a\6\2\z\s\f\g\n\3\i\a\d\u\8\k\k\l\n\9\n\a\w\t\h\9\l\q\m\8\f\d\l\t\m\g\w\8\1\f\1\g\h\6\o\4\8\o\u\d\y\v\6\6\o\7\7\9\m\x\1\f\l\r\3\i\m\d\g\8\l\1\f\b\a\u\f\h\p\4\3\h\s\a\z\x\3\2\k\o\e\6\d\u\i\2\4\d\c\6\v\w\z\x\y\6\l\7\x\u\i\n\3\k\q\j\i\9\e\g\v\m\0\e\x\0\2\7\n\5\v\w\6\w\i\k\c\b\k\h\0\h\9\q\5\r\y\3\c\f\p\g\m\t\a\t\u\i\6\s\r\9\p\p\8\b\p\u\k\h\c\n\i\k\5\i\g\q\d\s\4\z\s\7\b\n\c\v\3\p\i\a\r\7\9\y\h\z\j\d\o\x\d\n\8\o\4\3\5\c\t\u\p\6\i\c\j\1\t\2\n\l\y\r\t\8\d\a\m\r\7\0\i\q\c\m\y\e\i\o\3\m\7\l\c\f\9\a\e\9\l\3\m\w\7\g\d\4\s\r\n\8\m\u\6\l\e\r\0\p\t\k\h\u\f\o\b\2\s\g\l\g\z\6\y\h\u\k\a\a\7\m\s\r\i\7\n\m\d\p\l\d\u\k\o\8\4\p\s\q\n\7\3\a\g\k\u\7\g\f\2\x\o\p\j\r\e\o\c\b\n\o\x\4\h\5\k\z\8\b\r\u\3\c\k\6\7\x\d\l\q\c\d\p\r\v\v\y\8\m\b\8\0\b\8\z\9\q\u\g\x\0\d\3\k\0\1\3\n\o\g\v\6\s\x\y\w\8\x\w\y\k\k\z\p\3\8\l\g\4\k\c\5\w\c\l\t\h\o\d\3\3\r\h\e\8\r\4\e\5\w\f\f\b\7\j\y\n\u\x\1\i\h\i\s\4\u\6\z\6\x\p\c\c\g\9\7\q\9\3\v\s\v\n\k\g\1\v\i\e\b\h\1\q\z\w\k\l\t\1\d\1\m\k\9\4\m\0\5\y\9\y\t\j\w\o\7\p\t\8\d\3\c\z\b\n\q\c\s\6\s\x\8\8\q\x\u\m\y\x\x\6\i\6\y\s\e\v\f\7\8\o\j\m\v\p\x\x\7\6\v\e\e\g\h\m\f\y\r\l\r\e\7\0\3\j\8\p\q\b\b\w\n\s\4\y\f\i\p\h\z\n\g\6\z\9\r\3\y\o\n\3\b\f\a\c\t\v\z\k\9\o\2\b\6\p\b\w\m\9\b\y\s\s\1\a\y\0\r\5\l\5\c\0\3\7\o\l\9\l\z\h\k\f\2\w\s\b\f\k\b\s\4\a\y\b\h\3\l\2\0\f\i\2\x\e\f\3\6\e\2\c\3\o\d\9\u\c\n\h\i\v\9\k\o\u\x\v\2\c\s\s\s\5\q\g\8\5\l\v\b\n\u\f\j\8\j\8\9\n\w\c\m\k\3\i\8\5\r\k\y\7\y\s\1\n\c\i\o\x\f\2\b\o\7\u\f\w\b\v\0\t\j\i\y\u\z\i\2\n\g\m\m\w\l\n\0\c\v\9\6\t\6\e\z\i\2\d\p\4\8\0\k\a\5\d\h\p\p\y\f\r\6\d\d\c\j\m\h\3\3\5\9\x\i\8\l\9\y\6\5\8\7\4\q\i\u\e\w\n\o\w\z\t\2\0\8\t\k\v\3\t\t\c\h\v\1\2\m\t\f\v\7\4\0\f\j\y\1\z\l\h\6\t\h\3\k\v\4\p\w\v\9\y\y\i\5\w\x\h\r\8\j\s\q\q\p\7\1\y\2\e\e\z\1\i\g\i\c\7\q\6\3\h\l\m\1\3\4\z\9\h\o\0\c\l\l\d\9\4\i\m\w\j\r\7\h\m\x\g\c\l\j\f\o\4\f\4\c\k\r\6\z\i\c\k\2\5\p\v\1\b\a\0\z\w\l\r\2\5\9\u\z\z\3\s\3\x\s\u\q\o\g\j\w\g\x\w\w\o\i\r\9\v\7\g\8\a\0\9\k\c\6\t\8\e\r\5\n\c\m\o\l\g\y\f\c\d\x\p\h\c\b\6\w\y\t\9\v\d\2\g\8\2\u\x\x\d\5\h\c\d\4\j\p\g ]] 00:06:36.186 13:45:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:06:36.186 13:45:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ pj71agkqrqhh3jvedciucy35hv9z56dn0y0w3d2a62zsfgn3iadu8kkln9nawth9lqm8fdltmgw81f1gh6o48oudyv66o779mx1flr3imdg8l1fbaufhp43hsazx32koe6dui24dc6vwzxy6l7xuin3kqji9egvm0ex027n5vw6wikcbkh0h9q5ry3cfpgmtatui6sr9pp8bpukhcnik5igqds4zs7bncv3piar79yhzjdoxdn8o435ctup6icj1t2nlyrt8damr70iqcmyeio3m7lcf9ae9l3mw7gd4srn8mu6ler0ptkhufob2sglgz6yhukaa7msri7nmdplduko84psqn73agku7gf2xopjreocbnox4h5kz8bru3ck67xdlqcdprvvy8mb80b8z9qugx0d3k013nogv6sxyw8xwykkzp38lg4kc5wclthod33rhe8r4e5wffb7jynux1ihis4u6z6xpccg97q93vsvnkg1viebh1qzwklt1d1mk94m05y9ytjwo7pt8d3czbnqcs6sx88qxumyxx6i6ysevf78ojmvpxx76veeghmfyrlre703j8pqbbwns4yfiphzng6z9r3yon3bfactvzk9o2b6pbwm9byss1ay0r5l5c037ol9lzhkf2wsbfkbs4aybh3l20fi2xef36e2c3od9ucnhiv9kouxv2csss5qg85lvbnufj8j89nwcmk3i85rky7ys1ncioxf2bo7ufwbv0tjiyuzi2ngmmwln0cv96t6ezi2dp480ka5dhppyfr6ddcjmh3359xi8l9y65874qiuewnowzt208tkv3ttchv12mtfv740fjy1zlh6th3kv4pwv9yyi5wxhr8jsqqp71y2eez1igic7q63hlm134z9ho0clld94imwjr7hmxgcljfo4f4ckr6zick25pv1ba0zwlr259uzz3s3xsuqogjwgxwwoir9v7g8a09kc6t8er5ncmolgyfcdxphcb6wyt9vd2g82uxxd5hcd4jpg == 
\p\j\7\1\a\g\k\q\r\q\h\h\3\j\v\e\d\c\i\u\c\y\3\5\h\v\9\z\5\6\d\n\0\y\0\w\3\d\2\a\6\2\z\s\f\g\n\3\i\a\d\u\8\k\k\l\n\9\n\a\w\t\h\9\l\q\m\8\f\d\l\t\m\g\w\8\1\f\1\g\h\6\o\4\8\o\u\d\y\v\6\6\o\7\7\9\m\x\1\f\l\r\3\i\m\d\g\8\l\1\f\b\a\u\f\h\p\4\3\h\s\a\z\x\3\2\k\o\e\6\d\u\i\2\4\d\c\6\v\w\z\x\y\6\l\7\x\u\i\n\3\k\q\j\i\9\e\g\v\m\0\e\x\0\2\7\n\5\v\w\6\w\i\k\c\b\k\h\0\h\9\q\5\r\y\3\c\f\p\g\m\t\a\t\u\i\6\s\r\9\p\p\8\b\p\u\k\h\c\n\i\k\5\i\g\q\d\s\4\z\s\7\b\n\c\v\3\p\i\a\r\7\9\y\h\z\j\d\o\x\d\n\8\o\4\3\5\c\t\u\p\6\i\c\j\1\t\2\n\l\y\r\t\8\d\a\m\r\7\0\i\q\c\m\y\e\i\o\3\m\7\l\c\f\9\a\e\9\l\3\m\w\7\g\d\4\s\r\n\8\m\u\6\l\e\r\0\p\t\k\h\u\f\o\b\2\s\g\l\g\z\6\y\h\u\k\a\a\7\m\s\r\i\7\n\m\d\p\l\d\u\k\o\8\4\p\s\q\n\7\3\a\g\k\u\7\g\f\2\x\o\p\j\r\e\o\c\b\n\o\x\4\h\5\k\z\8\b\r\u\3\c\k\6\7\x\d\l\q\c\d\p\r\v\v\y\8\m\b\8\0\b\8\z\9\q\u\g\x\0\d\3\k\0\1\3\n\o\g\v\6\s\x\y\w\8\x\w\y\k\k\z\p\3\8\l\g\4\k\c\5\w\c\l\t\h\o\d\3\3\r\h\e\8\r\4\e\5\w\f\f\b\7\j\y\n\u\x\1\i\h\i\s\4\u\6\z\6\x\p\c\c\g\9\7\q\9\3\v\s\v\n\k\g\1\v\i\e\b\h\1\q\z\w\k\l\t\1\d\1\m\k\9\4\m\0\5\y\9\y\t\j\w\o\7\p\t\8\d\3\c\z\b\n\q\c\s\6\s\x\8\8\q\x\u\m\y\x\x\6\i\6\y\s\e\v\f\7\8\o\j\m\v\p\x\x\7\6\v\e\e\g\h\m\f\y\r\l\r\e\7\0\3\j\8\p\q\b\b\w\n\s\4\y\f\i\p\h\z\n\g\6\z\9\r\3\y\o\n\3\b\f\a\c\t\v\z\k\9\o\2\b\6\p\b\w\m\9\b\y\s\s\1\a\y\0\r\5\l\5\c\0\3\7\o\l\9\l\z\h\k\f\2\w\s\b\f\k\b\s\4\a\y\b\h\3\l\2\0\f\i\2\x\e\f\3\6\e\2\c\3\o\d\9\u\c\n\h\i\v\9\k\o\u\x\v\2\c\s\s\s\5\q\g\8\5\l\v\b\n\u\f\j\8\j\8\9\n\w\c\m\k\3\i\8\5\r\k\y\7\y\s\1\n\c\i\o\x\f\2\b\o\7\u\f\w\b\v\0\t\j\i\y\u\z\i\2\n\g\m\m\w\l\n\0\c\v\9\6\t\6\e\z\i\2\d\p\4\8\0\k\a\5\d\h\p\p\y\f\r\6\d\d\c\j\m\h\3\3\5\9\x\i\8\l\9\y\6\5\8\7\4\q\i\u\e\w\n\o\w\z\t\2\0\8\t\k\v\3\t\t\c\h\v\1\2\m\t\f\v\7\4\0\f\j\y\1\z\l\h\6\t\h\3\k\v\4\p\w\v\9\y\y\i\5\w\x\h\r\8\j\s\q\q\p\7\1\y\2\e\e\z\1\i\g\i\c\7\q\6\3\h\l\m\1\3\4\z\9\h\o\0\c\l\l\d\9\4\i\m\w\j\r\7\h\m\x\g\c\l\j\f\o\4\f\4\c\k\r\6\z\i\c\k\2\5\p\v\1\b\a\0\z\w\l\r\2\5\9\u\z\z\3\s\3\x\s\u\q\o\g\j\w\g\x\w\w\o\i\r\9\v\7\g\8\a\0\9\k\c\6\t\8\e\r\5\n\c\m\o\l\g\y\f\c\d\x\p\h\c\b\6\w\y\t\9\v\d\2\g\8\2\u\x\x\d\5\h\c\d\4\j\p\g ]] 00:06:36.186 13:45:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:36.754 13:45:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:06:36.754 13:45:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:06:36.754 13:45:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:36.754 13:45:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:36.754 [2024-12-06 13:45:35.974905] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:06:36.754 [2024-12-06 13:45:35.974996] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61227 ] 00:06:36.754 { 00:06:36.754 "subsystems": [ 00:06:36.754 { 00:06:36.754 "subsystem": "bdev", 00:06:36.754 "config": [ 00:06:36.754 { 00:06:36.754 "params": { 00:06:36.754 "block_size": 512, 00:06:36.754 "num_blocks": 1048576, 00:06:36.754 "name": "malloc0" 00:06:36.754 }, 00:06:36.754 "method": "bdev_malloc_create" 00:06:36.754 }, 00:06:36.754 { 00:06:36.754 "params": { 00:06:36.754 "filename": "/dev/zram1", 00:06:36.754 "name": "uring0" 00:06:36.754 }, 00:06:36.754 "method": "bdev_uring_create" 00:06:36.754 }, 00:06:36.755 { 00:06:36.755 "method": "bdev_wait_for_examine" 00:06:36.755 } 00:06:36.755 ] 00:06:36.755 } 00:06:36.755 ] 00:06:36.755 } 00:06:36.755 [2024-12-06 13:45:36.113302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.014 [2024-12-06 13:45:36.165551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.014 [2024-12-06 13:45:36.234772] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:38.393  [2024-12-06T13:45:38.735Z] Copying: 185/512 [MB] (185 MBps) [2024-12-06T13:45:39.303Z] Copying: 369/512 [MB] (183 MBps) [2024-12-06T13:45:39.872Z] Copying: 512/512 [MB] (average 184 MBps) 00:06:40.468 00:06:40.468 13:45:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:06:40.468 13:45:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:06:40.468 13:45:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:06:40.468 13:45:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:06:40.468 13:45:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:06:40.468 13:45:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:06:40.468 13:45:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:40.468 13:45:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:40.468 [2024-12-06 13:45:39.812092] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:06:40.468 [2024-12-06 13:45:39.812218] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61282 ] 00:06:40.468 { 00:06:40.468 "subsystems": [ 00:06:40.468 { 00:06:40.468 "subsystem": "bdev", 00:06:40.468 "config": [ 00:06:40.468 { 00:06:40.468 "params": { 00:06:40.468 "block_size": 512, 00:06:40.468 "num_blocks": 1048576, 00:06:40.468 "name": "malloc0" 00:06:40.468 }, 00:06:40.468 "method": "bdev_malloc_create" 00:06:40.468 }, 00:06:40.468 { 00:06:40.468 "params": { 00:06:40.468 "filename": "/dev/zram1", 00:06:40.468 "name": "uring0" 00:06:40.468 }, 00:06:40.468 "method": "bdev_uring_create" 00:06:40.468 }, 00:06:40.468 { 00:06:40.468 "params": { 00:06:40.468 "name": "uring0" 00:06:40.468 }, 00:06:40.469 "method": "bdev_uring_delete" 00:06:40.469 }, 00:06:40.469 { 00:06:40.469 "method": "bdev_wait_for_examine" 00:06:40.469 } 00:06:40.469 ] 00:06:40.469 } 00:06:40.469 ] 00:06:40.469 } 00:06:40.758 [2024-12-06 13:45:39.961464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.758 [2024-12-06 13:45:40.007237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.758 [2024-12-06 13:45:40.074868] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:41.041  [2024-12-06T13:45:41.029Z] Copying: 0/0 [B] (average 0 Bps) 00:06:41.625 00:06:41.625 13:45:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:41.625 13:45:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:06:41.625 13:45:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:06:41.625 13:45:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:41.625 13:45:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:41.625 13:45:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:06:41.625 13:45:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:41.626 13:45:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:41.626 13:45:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.626 13:45:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:41.626 13:45:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.626 13:45:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:41.626 13:45:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.626 13:45:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:41.626 13:45:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:41.626 13:45:40 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:41.626 [2024-12-06 13:45:40.918049] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:06:41.626 [2024-12-06 13:45:40.918167] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61316 ] 00:06:41.626 { 00:06:41.626 "subsystems": [ 00:06:41.626 { 00:06:41.626 "subsystem": "bdev", 00:06:41.626 "config": [ 00:06:41.626 { 00:06:41.626 "params": { 00:06:41.626 "block_size": 512, 00:06:41.626 "num_blocks": 1048576, 00:06:41.626 "name": "malloc0" 00:06:41.626 }, 00:06:41.626 "method": "bdev_malloc_create" 00:06:41.626 }, 00:06:41.626 { 00:06:41.626 "params": { 00:06:41.626 "filename": "/dev/zram1", 00:06:41.626 "name": "uring0" 00:06:41.626 }, 00:06:41.626 "method": "bdev_uring_create" 00:06:41.626 }, 00:06:41.626 { 00:06:41.626 "params": { 00:06:41.626 "name": "uring0" 00:06:41.626 }, 00:06:41.626 "method": "bdev_uring_delete" 00:06:41.626 }, 00:06:41.626 { 00:06:41.626 "method": "bdev_wait_for_examine" 00:06:41.626 } 00:06:41.626 ] 00:06:41.626 } 00:06:41.626 ] 00:06:41.626 } 00:06:41.884 [2024-12-06 13:45:41.060673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.884 [2024-12-06 13:45:41.111509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.884 [2024-12-06 13:45:41.180670] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:42.143 [2024-12-06 13:45:41.432614] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:06:42.143 [2024-12-06 13:45:41.432671] spdk_dd.c: 931:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:06:42.143 [2024-12-06 13:45:41.432688] spdk_dd.c:1088:dd_run: *ERROR*: uring0: No such device 00:06:42.143 [2024-12-06 13:45:41.432697] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:42.724 [2024-12-06 13:45:41.867852] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:42.724 13:45:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:06:42.724 13:45:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:42.724 13:45:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:06:42.724 13:45:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:06:42.724 13:45:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:06:42.724 13:45:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:42.724 13:45:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:06:42.724 13:45:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:06:42.724 13:45:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:06:42.724 13:45:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:06:42.725 13:45:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:06:42.725 13:45:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:42.981 00:06:42.981 real 0m15.572s 00:06:42.981 user 0m10.374s 00:06:42.981 ************************************ 00:06:42.981 END TEST dd_uring_copy 00:06:42.981 ************************************ 00:06:42.981 sys 0m13.322s 00:06:42.981 13:45:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.981 13:45:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:42.981 ************************************ 00:06:42.981 END TEST spdk_dd_uring 00:06:42.981 ************************************ 00:06:42.981 00:06:42.981 real 0m15.830s 00:06:42.981 user 0m10.523s 00:06:42.981 sys 0m13.431s 00:06:42.981 13:45:42 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.981 13:45:42 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:06:42.981 13:45:42 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:06:42.981 13:45:42 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.981 13:45:42 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.981 13:45:42 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:42.981 ************************************ 00:06:42.981 START TEST spdk_dd_sparse 00:06:42.981 ************************************ 00:06:42.981 13:45:42 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:06:43.239 * Looking for test storage... 00:06:43.239 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:43.239 13:45:42 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:43.239 13:45:42 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # lcov --version 00:06:43.239 13:45:42 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:43.239 13:45:42 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:43.239 13:45:42 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.239 13:45:42 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.239 13:45:42 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.239 13:45:42 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.239 13:45:42 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.239 13:45:42 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.239 13:45:42 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.239 13:45:42 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.239 13:45:42 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.239 13:45:42 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.239 13:45:42 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.239 13:45:42 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:06:43.239 13:45:42 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:06:43.239 13:45:42 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.239 13:45:42 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:43.239 13:45:42 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:06:43.239 13:45:42 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:06:43.239 13:45:42 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.239 13:45:42 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:06:43.239 13:45:42 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.239 13:45:42 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:06:43.239 13:45:42 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:06:43.239 13:45:42 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.239 13:45:42 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:06:43.239 13:45:42 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.239 13:45:42 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.239 13:45:42 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.239 13:45:42 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:06:43.239 13:45:42 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.239 13:45:42 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:43.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.239 --rc genhtml_branch_coverage=1 00:06:43.239 --rc genhtml_function_coverage=1 00:06:43.239 --rc genhtml_legend=1 00:06:43.239 --rc geninfo_all_blocks=1 00:06:43.239 --rc geninfo_unexecuted_blocks=1 00:06:43.239 00:06:43.239 ' 00:06:43.239 13:45:42 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:43.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.239 --rc genhtml_branch_coverage=1 00:06:43.239 --rc genhtml_function_coverage=1 00:06:43.239 --rc genhtml_legend=1 00:06:43.239 --rc geninfo_all_blocks=1 00:06:43.239 --rc geninfo_unexecuted_blocks=1 00:06:43.239 00:06:43.239 ' 00:06:43.239 13:45:42 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:43.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.239 --rc genhtml_branch_coverage=1 00:06:43.239 --rc genhtml_function_coverage=1 00:06:43.239 --rc genhtml_legend=1 00:06:43.239 --rc geninfo_all_blocks=1 00:06:43.239 --rc geninfo_unexecuted_blocks=1 00:06:43.239 00:06:43.239 ' 00:06:43.239 13:45:42 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:43.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.239 --rc genhtml_branch_coverage=1 00:06:43.239 --rc genhtml_function_coverage=1 00:06:43.239 --rc genhtml_legend=1 00:06:43.239 --rc geninfo_all_blocks=1 00:06:43.239 --rc geninfo_unexecuted_blocks=1 00:06:43.239 00:06:43.239 ' 00:06:43.239 13:45:42 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:43.239 13:45:42 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:06:43.239 13:45:42 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:43.239 13:45:42 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:43.239 13:45:42 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:43.239 13:45:42 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.240 13:45:42 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.240 13:45:42 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.240 13:45:42 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:06:43.240 13:45:42 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.240 13:45:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:06:43.240 13:45:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:06:43.240 13:45:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:06:43.240 13:45:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:06:43.240 13:45:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:06:43.240 13:45:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:06:43.240 13:45:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:06:43.240 13:45:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:06:43.240 13:45:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:06:43.240 13:45:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:06:43.240 13:45:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:06:43.240 1+0 records in 00:06:43.240 1+0 records out 00:06:43.240 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00824239 s, 509 MB/s 00:06:43.240 13:45:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:06:43.240 1+0 records in 00:06:43.240 1+0 records out 00:06:43.240 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00775477 s, 541 MB/s 00:06:43.240 13:45:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:06:43.240 1+0 records in 00:06:43.240 1+0 records out 00:06:43.240 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00756512 s, 554 MB/s 00:06:43.240 13:45:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:06:43.240 13:45:42 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.240 13:45:42 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.240 13:45:42 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:43.240 ************************************ 00:06:43.240 START TEST dd_sparse_file_to_file 00:06:43.240 ************************************ 00:06:43.240 13:45:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:06:43.240 13:45:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:06:43.240 13:45:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:06:43.240 13:45:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:43.240 13:45:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:06:43.240 13:45:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:06:43.240 13:45:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:06:43.240 13:45:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:06:43.240 13:45:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:06:43.240 13:45:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:06:43.240 13:45:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:43.498 [2024-12-06 13:45:42.669357] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
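For reference, the sparse workflow exercised above can be condensed into a short standalone sketch. File names, sizes, and the --bs value are copied from the surrounding log; the JSON configuration (bdev_aio_create plus bdev_lvol_create_lvstore, printed in full below) is assumed to sit in a plain conf.json file rather than the harness's /dev/fd/62 descriptor, so this is an illustration of the flow, not the test script itself.

    truncate dd_sparse_aio_disk --size 104857600          # 100 MiB backing file for the dd_aio bdev
    dd if=/dev/zero of=file_zero1 bs=4M count=1           # data at [0, 4 MiB)
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4    # data at [16, 20 MiB), hole before it
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8    # data at [32, 36 MiB): 36 MiB file, 12 MiB allocated
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json conf.json
    stat --printf=%s file_zero1    # 37748736 (apparent size)
    stat --printf=%s file_zero2    # 37748736
    stat --printf=%b file_zero1    # 24576 512-byte blocks actually allocated
    stat --printf=%b file_zero2    # 24576 (holes preserved by --sparse)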
00:06:43.498 [2024-12-06 13:45:42.669613] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61409 ] 00:06:43.498 { 00:06:43.498 "subsystems": [ 00:06:43.498 { 00:06:43.498 "subsystem": "bdev", 00:06:43.498 "config": [ 00:06:43.498 { 00:06:43.498 "params": { 00:06:43.498 "block_size": 4096, 00:06:43.498 "filename": "dd_sparse_aio_disk", 00:06:43.498 "name": "dd_aio" 00:06:43.498 }, 00:06:43.498 "method": "bdev_aio_create" 00:06:43.498 }, 00:06:43.498 { 00:06:43.498 "params": { 00:06:43.498 "lvs_name": "dd_lvstore", 00:06:43.498 "bdev_name": "dd_aio" 00:06:43.498 }, 00:06:43.498 "method": "bdev_lvol_create_lvstore" 00:06:43.498 }, 00:06:43.498 { 00:06:43.498 "method": "bdev_wait_for_examine" 00:06:43.498 } 00:06:43.498 ] 00:06:43.498 } 00:06:43.498 ] 00:06:43.498 } 00:06:43.498 [2024-12-06 13:45:42.813211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.498 [2024-12-06 13:45:42.860700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.757 [2024-12-06 13:45:42.931583] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:43.757  [2024-12-06T13:45:43.420Z] Copying: 12/36 [MB] (average 857 MBps) 00:06:44.016 00:06:44.016 13:45:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:06:44.016 13:45:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:06:44.016 13:45:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:06:44.016 13:45:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:06:44.016 13:45:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:06:44.016 13:45:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:06:44.016 13:45:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:06:44.016 13:45:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:06:44.016 13:45:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:06:44.016 13:45:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:06:44.016 00:06:44.016 real 0m0.711s 00:06:44.016 user 0m0.420s 00:06:44.016 sys 0m0.430s 00:06:44.016 13:45:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.016 13:45:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:44.016 ************************************ 00:06:44.016 END TEST dd_sparse_file_to_file 00:06:44.016 ************************************ 00:06:44.016 13:45:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:06:44.016 13:45:43 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.016 13:45:43 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.016 13:45:43 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:44.016 ************************************ 00:06:44.016 START TEST dd_sparse_file_to_bdev 
00:06:44.016 ************************************ 00:06:44.016 13:45:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:06:44.016 13:45:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:44.016 13:45:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:06:44.016 13:45:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:06:44.016 13:45:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:06:44.016 13:45:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:06:44.016 13:45:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:06:44.016 13:45:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:44.016 13:45:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:44.276 [2024-12-06 13:45:43.430284] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:06:44.276 [2024-12-06 13:45:43.430358] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61457 ] 00:06:44.276 { 00:06:44.276 "subsystems": [ 00:06:44.276 { 00:06:44.276 "subsystem": "bdev", 00:06:44.276 "config": [ 00:06:44.276 { 00:06:44.276 "params": { 00:06:44.276 "block_size": 4096, 00:06:44.276 "filename": "dd_sparse_aio_disk", 00:06:44.276 "name": "dd_aio" 00:06:44.276 }, 00:06:44.276 "method": "bdev_aio_create" 00:06:44.276 }, 00:06:44.276 { 00:06:44.276 "params": { 00:06:44.276 "lvs_name": "dd_lvstore", 00:06:44.276 "lvol_name": "dd_lvol", 00:06:44.276 "size_in_mib": 36, 00:06:44.276 "thin_provision": true 00:06:44.276 }, 00:06:44.276 "method": "bdev_lvol_create" 00:06:44.276 }, 00:06:44.276 { 00:06:44.276 "method": "bdev_wait_for_examine" 00:06:44.276 } 00:06:44.276 ] 00:06:44.276 } 00:06:44.276 ] 00:06:44.276 } 00:06:44.276 [2024-12-06 13:45:43.568387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.276 [2024-12-06 13:45:43.616569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.534 [2024-12-06 13:45:43.685279] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:44.534  [2024-12-06T13:45:44.197Z] Copying: 12/36 [MB] (average 413 MBps) 00:06:44.793 00:06:44.793 ************************************ 00:06:44.793 END TEST dd_sparse_file_to_bdev 00:06:44.793 ************************************ 00:06:44.793 00:06:44.793 real 0m0.672s 00:06:44.793 user 0m0.410s 00:06:44.793 sys 0m0.420s 00:06:44.793 13:45:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.793 13:45:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:44.793 13:45:44 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:06:44.793 13:45:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.793 13:45:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.793 13:45:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:44.793 ************************************ 00:06:44.793 START TEST dd_sparse_bdev_to_file 00:06:44.793 ************************************ 00:06:44.793 13:45:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:06:44.793 13:45:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:06:44.793 13:45:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:06:44.793 13:45:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:44.793 13:45:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:06:44.793 13:45:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:06:44.793 13:45:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:06:44.793 13:45:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:06:44.793 13:45:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:44.793 [2024-12-06 13:45:44.151488] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:06:44.793 [2024-12-06 13:45:44.151558] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61495 ] 00:06:44.793 { 00:06:44.793 "subsystems": [ 00:06:44.793 { 00:06:44.793 "subsystem": "bdev", 00:06:44.793 "config": [ 00:06:44.793 { 00:06:44.793 "params": { 00:06:44.793 "block_size": 4096, 00:06:44.793 "filename": "dd_sparse_aio_disk", 00:06:44.793 "name": "dd_aio" 00:06:44.793 }, 00:06:44.793 "method": "bdev_aio_create" 00:06:44.793 }, 00:06:44.793 { 00:06:44.793 "method": "bdev_wait_for_examine" 00:06:44.793 } 00:06:44.793 ] 00:06:44.793 } 00:06:44.793 ] 00:06:44.793 } 00:06:45.052 [2024-12-06 13:45:44.291373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.052 [2024-12-06 13:45:44.339509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.052 [2024-12-06 13:45:44.409470] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:45.311  [2024-12-06T13:45:44.974Z] Copying: 12/36 [MB] (average 857 MBps) 00:06:45.570 00:06:45.570 13:45:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:06:45.570 13:45:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:06:45.570 13:45:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:06:45.570 13:45:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:06:45.570 13:45:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 
37748736 == \3\7\7\4\8\7\3\6 ]] 00:06:45.570 13:45:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:06:45.570 13:45:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:06:45.570 13:45:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:06:45.570 13:45:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:06:45.570 13:45:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:06:45.570 00:06:45.570 real 0m0.679s 00:06:45.570 user 0m0.416s 00:06:45.570 sys 0m0.416s 00:06:45.570 13:45:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.570 ************************************ 00:06:45.570 END TEST dd_sparse_bdev_to_file 00:06:45.570 ************************************ 00:06:45.570 13:45:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:45.570 13:45:44 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:06:45.570 13:45:44 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:06:45.570 13:45:44 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:06:45.570 13:45:44 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:06:45.570 13:45:44 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:06:45.570 ************************************ 00:06:45.570 END TEST spdk_dd_sparse 00:06:45.570 ************************************ 00:06:45.570 00:06:45.570 real 0m2.498s 00:06:45.570 user 0m1.434s 00:06:45.570 sys 0m1.498s 00:06:45.570 13:45:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.570 13:45:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:45.570 13:45:44 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:06:45.570 13:45:44 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.570 13:45:44 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.570 13:45:44 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:45.570 ************************************ 00:06:45.570 START TEST spdk_dd_negative 00:06:45.570 ************************************ 00:06:45.570 13:45:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:06:45.834 * Looking for test storage... 
00:06:45.834 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:45.834 13:45:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:45.834 13:45:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # lcov --version 00:06:45.834 13:45:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:45.834 13:45:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:45.834 13:45:45 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.834 13:45:45 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.834 13:45:45 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.834 13:45:45 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.834 13:45:45 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.834 13:45:45 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.834 13:45:45 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.834 13:45:45 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.834 13:45:45 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.834 13:45:45 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.835 13:45:45 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.835 13:45:45 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:06:45.835 13:45:45 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:06:45.835 13:45:45 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.835 13:45:45 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:45.835 13:45:45 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:06:45.835 13:45:45 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:06:45.835 13:45:45 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.835 13:45:45 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:06:45.835 13:45:45 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.835 13:45:45 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:06:45.835 13:45:45 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:06:45.835 13:45:45 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.835 13:45:45 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:06:45.835 13:45:45 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.835 13:45:45 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.835 13:45:45 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.835 13:45:45 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:06:45.835 13:45:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.835 13:45:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:45.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.835 --rc genhtml_branch_coverage=1 00:06:45.835 --rc genhtml_function_coverage=1 00:06:45.835 --rc genhtml_legend=1 00:06:45.835 --rc geninfo_all_blocks=1 00:06:45.835 --rc geninfo_unexecuted_blocks=1 00:06:45.835 00:06:45.835 ' 00:06:45.835 13:45:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:45.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.835 --rc genhtml_branch_coverage=1 00:06:45.835 --rc genhtml_function_coverage=1 00:06:45.835 --rc genhtml_legend=1 00:06:45.835 --rc geninfo_all_blocks=1 00:06:45.835 --rc geninfo_unexecuted_blocks=1 00:06:45.835 00:06:45.835 ' 00:06:45.835 13:45:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:45.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.835 --rc genhtml_branch_coverage=1 00:06:45.835 --rc genhtml_function_coverage=1 00:06:45.836 --rc genhtml_legend=1 00:06:45.836 --rc geninfo_all_blocks=1 00:06:45.836 --rc geninfo_unexecuted_blocks=1 00:06:45.836 00:06:45.836 ' 00:06:45.836 13:45:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:45.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.836 --rc genhtml_branch_coverage=1 00:06:45.836 --rc genhtml_function_coverage=1 00:06:45.836 --rc genhtml_legend=1 00:06:45.836 --rc geninfo_all_blocks=1 00:06:45.836 --rc geninfo_unexecuted_blocks=1 00:06:45.836 00:06:45.836 ' 00:06:45.836 13:45:45 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:45.836 13:45:45 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:06:45.836 13:45:45 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:45.836 13:45:45 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:45.836 13:45:45 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:06:45.836 13:45:45 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.837 13:45:45 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.837 13:45:45 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.837 13:45:45 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:06:45.837 13:45:45 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.837 13:45:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:45.837 13:45:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:45.837 13:45:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:45.837 13:45:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:45.837 13:45:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:06:45.837 13:45:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.837 13:45:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.837 13:45:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:45.837 ************************************ 00:06:45.837 START TEST 
dd_invalid_arguments 00:06:45.837 ************************************ 00:06:45.837 13:45:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:06:45.837 13:45:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:45.837 13:45:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:06:45.837 13:45:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:45.837 13:45:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.837 13:45:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.837 13:45:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.837 13:45:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.837 13:45:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.837 13:45:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.837 13:45:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.837 13:45:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:45.837 13:45:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:45.837 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:06:45.837 00:06:45.837 CPU options: 00:06:45.837 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:06:45.837 (like [0,1,10]) 00:06:45.837 --lcores lcore to CPU mapping list. The list is in the format: 00:06:45.837 [<,lcores[@CPUs]>...] 00:06:45.837 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:45.837 Within the group, '-' is used for range separator, 00:06:45.837 ',' is used for single number separator. 00:06:45.837 '( )' can be omitted for single element group, 00:06:45.837 '@' can be omitted if cpus and lcores have the same value 00:06:45.837 --disable-cpumask-locks Disable CPU core lock files. 00:06:45.837 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:06:45.837 pollers in the app support interrupt mode) 00:06:45.837 -p, --main-core main (primary) core for DPDK 00:06:45.837 00:06:45.837 Configuration options: 00:06:45.837 -c, --config, --json JSON config file 00:06:45.837 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:45.837 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:06:45.837 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:45.837 --rpcs-allowed comma-separated list of permitted RPCS 00:06:45.837 --json-ignore-init-errors don't exit on invalid config entry 00:06:45.837 00:06:45.837 Memory options: 00:06:45.837 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:45.837 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:45.837 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:45.837 -R, --huge-unlink unlink huge files after initialization 00:06:45.837 -n, --mem-channels number of memory channels used for DPDK 00:06:45.837 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:45.837 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:45.837 --no-huge run without using hugepages 00:06:45.837 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:06:45.837 -i, --shm-id shared memory ID (optional) 00:06:45.837 -g, --single-file-segments force creating just one hugetlbfs file 00:06:45.837 00:06:45.837 PCI options: 00:06:45.837 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:45.837 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:45.837 -u, --no-pci disable PCI access 00:06:45.837 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:45.837 00:06:45.837 Log options: 00:06:45.837 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:06:45.837 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:06:45.837 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:06:45.837 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:06:45.837 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:06:45.837 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:06:45.837 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:06:45.837 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:06:45.837 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:06:45.837 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:06:45.837 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:06:45.837 --silence-noticelog disable notice level logging to stderr 00:06:45.837 00:06:45.837 Trace options: 00:06:45.837 --num-trace-entries number of trace entries for each core, must be power of 2, 00:06:45.837 setting 0 to disable trace (default 32768) 00:06:45.837 Tracepoints vary in size and can use more than one trace entry. 00:06:45.837 -e, --tpoint-group [:] 00:06:45.837 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:06:45.837 [2024-12-06 13:45:45.187837] spdk_dd.c:1478:main: *ERROR*: Invalid arguments 00:06:45.837 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:06:45.837 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:06:45.837 bdev_raid, scheduler, all). 00:06:45.837 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:06:45.837 a tracepoint group. First tpoint inside a group can be enabled by 00:06:45.837 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:06:45.837 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:06:45.837 in /include/spdk_internal/trace_defs.h 00:06:45.837 00:06:45.837 Other options: 00:06:45.837 -h, --help show this usage 00:06:45.837 -v, --version print SPDK version 00:06:45.837 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:45.837 --env-context Opaque context for use of the env implementation 00:06:45.837 00:06:45.837 Application specific: 00:06:45.837 [--------- DD Options ---------] 00:06:45.837 --if Input file. Must specify either --if or --ib. 00:06:45.837 --ib Input bdev. Must specifier either --if or --ib 00:06:45.837 --of Output file. Must specify either --of or --ob. 00:06:45.837 --ob Output bdev. Must specify either --of or --ob. 00:06:45.837 --iflag Input file flags. 00:06:45.837 --oflag Output file flags. 00:06:45.837 --bs I/O unit size (default: 4096) 00:06:45.837 --qd Queue depth (default: 2) 00:06:45.837 --count I/O unit count. The number of I/O units to copy. (default: all) 00:06:45.837 --skip Skip this many I/O units at start of input. (default: 0) 00:06:45.837 --seek Skip this many I/O units at start of output. (default: 0) 00:06:45.837 --aio Force usage of AIO. (by default io_uring is used if available) 00:06:45.837 --sparse Enable hole skipping in input target 00:06:45.837 Available iflag and oflag values: 00:06:45.837 append - append mode 00:06:45.837 direct - use direct I/O for data 00:06:45.837 directory - fail unless a directory 00:06:45.837 dsync - use synchronized I/O for data 00:06:45.837 noatime - do not update access time 00:06:45.837 noctty - do not assign controlling terminal from file 00:06:45.837 nofollow - do not follow symlinks 00:06:45.837 nonblock - use non-blocking I/O 00:06:45.837 sync - use synchronized I/O for data and metadata 00:06:45.837 ************************************ 00:06:45.837 END TEST dd_invalid_arguments 00:06:45.837 ************************************ 00:06:45.837 13:45:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:06:45.837 13:45:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:45.837 13:45:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:45.837 13:45:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:45.837 00:06:45.837 real 0m0.082s 00:06:45.837 user 0m0.053s 00:06:45.837 sys 0m0.026s 00:06:45.837 13:45:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.837 13:45:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:46.096 ************************************ 00:06:46.096 START TEST dd_double_input 00:06:46.096 ************************************ 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:46.096 [2024-12-06 13:45:45.323811] spdk_dd.c:1485:main: *ERROR*: You may specify either --if or --ib, but not both. 
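The rejection above can be reproduced outside the run_test/NOT wrappers with a minimal sketch along the following lines; the spdk_dd path and dd.dump0 name are taken from the log, while the err.txt capture file and the loose grep on the error text are illustrative choices, not part of the test suite.

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    touch "$DUMP0"
    # Supplying both an input file (--if) and an input bdev (--ib, even empty) must fail.
    if "$SPDK_DD" --if="$DUMP0" --ib= --ob= 2>err.txt; then
        echo "expected spdk_dd to reject --if combined with --ib" >&2
        exit 1
    fi
    grep -q 'either --if or --ib, but not both' err.txt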
00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:46.096 00:06:46.096 real 0m0.077s 00:06:46.096 user 0m0.053s 00:06:46.096 sys 0m0.023s 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.096 ************************************ 00:06:46.096 END TEST dd_double_input 00:06:46.096 ************************************ 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:46.096 ************************************ 00:06:46.096 START TEST dd_double_output 00:06:46.096 ************************************ 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:46.096 [2024-12-06 13:45:45.454676] spdk_dd.c:1491:main: *ERROR*: You may specify either --of or --ob, but not both. 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:46.096 ************************************ 00:06:46.096 END TEST dd_double_output 00:06:46.096 ************************************ 00:06:46.096 00:06:46.096 real 0m0.079s 00:06:46.096 user 0m0.052s 00:06:46.096 sys 0m0.026s 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.096 13:45:45 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:46.354 ************************************ 00:06:46.354 START TEST dd_no_input 00:06:46.354 ************************************ 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:46.354 [2024-12-06 13:45:45.585372] spdk_dd.c:1497:main: 
*ERROR*: You must specify either --if or --ib 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:46.354 00:06:46.354 real 0m0.075s 00:06:46.354 user 0m0.050s 00:06:46.354 sys 0m0.025s 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:06:46.354 ************************************ 00:06:46.354 END TEST dd_no_input 00:06:46.354 ************************************ 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:46.354 ************************************ 00:06:46.354 START TEST dd_no_output 00:06:46.354 ************************************ 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:46.354 [2024-12-06 13:45:45.717706] spdk_dd.c:1503:main: *ERROR*: You must specify either --of or --ob 00:06:46.354 13:45:45 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:46.354 00:06:46.354 real 0m0.081s 00:06:46.354 user 0m0.048s 00:06:46.354 sys 0m0.032s 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.354 ************************************ 00:06:46.354 END TEST dd_no_output 00:06:46.354 ************************************ 00:06:46.354 13:45:45 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:06:46.613 13:45:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:06:46.613 13:45:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.613 13:45:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.613 13:45:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:46.613 ************************************ 00:06:46.613 START TEST dd_wrong_blocksize 00:06:46.613 ************************************ 00:06:46.613 13:45:45 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:06:46.613 13:45:45 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:46.613 13:45:45 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:06:46.613 13:45:45 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:46.613 13:45:45 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.613 13:45:45 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.613 13:45:45 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.613 13:45:45 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.613 13:45:45 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.613 13:45:45 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.613 13:45:45 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.613 13:45:45 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:46.613 13:45:45 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:46.613 [2024-12-06 13:45:45.849360] spdk_dd.c:1509:main: *ERROR*: Invalid --bs value 00:06:46.613 13:45:45 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:06:46.613 13:45:45 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:46.613 13:45:45 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:46.613 13:45:45 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:46.613 00:06:46.613 real 0m0.076s 00:06:46.613 user 0m0.047s 00:06:46.613 sys 0m0.027s 00:06:46.613 13:45:45 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.613 ************************************ 00:06:46.613 END TEST dd_wrong_blocksize 00:06:46.613 ************************************ 00:06:46.613 13:45:45 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:06:46.613 13:45:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:06:46.613 13:45:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.613 13:45:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.613 13:45:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:46.613 ************************************ 00:06:46.613 START TEST dd_smaller_blocksize 00:06:46.613 ************************************ 00:06:46.613 13:45:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:06:46.613 13:45:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:46.613 13:45:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:06:46.613 13:45:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:46.613 13:45:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.613 13:45:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.613 13:45:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.613 13:45:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.613 13:45:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.613 13:45:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.613 13:45:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.613 
13:45:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:46.613 13:45:45 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:46.613 [2024-12-06 13:45:45.981601] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:06:46.613 [2024-12-06 13:45:45.981693] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61722 ] 00:06:46.871 [2024-12-06 13:45:46.132493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.871 [2024-12-06 13:45:46.196800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.871 [2024-12-06 13:45:46.271473] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:47.436 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:06:47.695 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:06:47.695 [2024-12-06 13:45:46.898252] spdk_dd.c:1182:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:06:47.695 [2024-12-06 13:45:46.898325] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:47.695 [2024-12-06 13:45:47.065266] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:47.953 13:45:47 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:06:47.953 13:45:47 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:47.953 13:45:47 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:06:47.953 13:45:47 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:06:47.953 13:45:47 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:06:47.953 13:45:47 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:47.953 00:06:47.953 real 0m1.214s 00:06:47.954 user 0m0.451s 00:06:47.954 sys 0m0.655s 00:06:47.954 13:45:47 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.954 13:45:47 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:06:47.954 ************************************ 00:06:47.954 END TEST dd_smaller_blocksize 00:06:47.954 ************************************ 00:06:47.954 13:45:47 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:06:47.954 13:45:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.954 13:45:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.954 13:45:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:47.954 ************************************ 00:06:47.954 START TEST dd_invalid_count 00:06:47.954 ************************************ 00:06:47.954 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
00:06:47.954 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:47.954 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:06:47.954 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:47.954 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.954 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:47.954 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.954 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:47.954 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.954 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:47.954 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.954 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:47.954 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:47.954 [2024-12-06 13:45:47.238754] spdk_dd.c:1515:main: *ERROR*: Invalid --count value 00:06:47.954 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:06:47.954 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:47.954 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:47.954 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:47.954 00:06:47.954 real 0m0.061s 00:06:47.954 user 0m0.031s 00:06:47.954 sys 0m0.028s 00:06:47.954 ************************************ 00:06:47.954 END TEST dd_invalid_count 00:06:47.954 ************************************ 00:06:47.954 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.954 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:06:47.954 13:45:47 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:06:47.954 13:45:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.954 13:45:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.954 13:45:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:47.954 ************************************ 
00:06:47.954 START TEST dd_invalid_oflag 00:06:47.954 ************************************ 00:06:47.954 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:06:47.954 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:47.954 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:06:47.954 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:47.954 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.954 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:47.954 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.954 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:47.954 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.954 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:47.954 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.954 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:47.954 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:48.213 [2024-12-06 13:45:47.368542] spdk_dd.c:1521:main: *ERROR*: --oflags may be used only with --of 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:48.213 00:06:48.213 real 0m0.080s 00:06:48.213 user 0m0.051s 00:06:48.213 sys 0m0.028s 00:06:48.213 ************************************ 00:06:48.213 END TEST dd_invalid_oflag 00:06:48.213 ************************************ 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:48.213 ************************************ 00:06:48.213 START TEST dd_invalid_iflag 00:06:48.213 
************************************ 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:48.213 [2024-12-06 13:45:47.505127] spdk_dd.c:1527:main: *ERROR*: --iflags may be used only with --if 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:48.213 ************************************ 00:06:48.213 END TEST dd_invalid_iflag 00:06:48.213 ************************************ 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:48.213 00:06:48.213 real 0m0.079s 00:06:48.213 user 0m0.048s 00:06:48.213 sys 0m0.029s 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:48.213 ************************************ 00:06:48.213 START TEST dd_unknown_flag 00:06:48.213 ************************************ 00:06:48.213 
13:45:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.213 13:45:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:48.214 13:45:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:48.473 [2024-12-06 13:45:47.638160] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:06:48.473 [2024-12-06 13:45:47.638254] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61819 ] 00:06:48.473 [2024-12-06 13:45:47.781298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.473 [2024-12-06 13:45:47.829247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.731 [2024-12-06 13:45:47.899555] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:48.731 [2024-12-06 13:45:47.944064] spdk_dd.c: 984:parse_flags: *ERROR*: Unknown file flag: -1 00:06:48.731 [2024-12-06 13:45:47.944139] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:48.731 [2024-12-06 13:45:47.944208] spdk_dd.c: 984:parse_flags: *ERROR*: Unknown file flag: -1 00:06:48.731 [2024-12-06 13:45:47.944220] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:48.732 [2024-12-06 13:45:47.944458] spdk_dd.c:1216:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:06:48.732 [2024-12-06 13:45:47.944474] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:48.732 [2024-12-06 13:45:47.944531] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:06:48.732 [2024-12-06 13:45:47.944540] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:06:48.732 [2024-12-06 13:45:48.099230] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:48.991 13:45:48 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:06:48.991 13:45:48 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:48.991 13:45:48 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:06:48.991 13:45:48 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:06:48.991 13:45:48 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:06:48.991 ************************************ 00:06:48.991 END TEST dd_unknown_flag 00:06:48.991 13:45:48 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:48.991 00:06:48.991 real 0m0.591s 00:06:48.991 user 0m0.317s 00:06:48.991 sys 0m0.178s 00:06:48.991 13:45:48 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.991 13:45:48 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:06:48.991 ************************************ 00:06:48.991 13:45:48 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:06:48.991 13:45:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.991 13:45:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.991 13:45:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:48.991 ************************************ 00:06:48.991 START TEST dd_invalid_json 00:06:48.991 ************************************ 00:06:48.991 13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:06:48.991 13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:48.991 13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:06:48.991 13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:48.991 13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.991 13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:06:48.991 13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.991 13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.991 13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.991 13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.991 13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.991 13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.991 13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:48.991 13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:48.991 [2024-12-06 13:45:48.288055] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:06:48.991 [2024-12-06 13:45:48.288166] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61853 ] 00:06:49.251 [2024-12-06 13:45:48.432038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.251 [2024-12-06 13:45:48.478437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.251 [2024-12-06 13:45:48.478510] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:06:49.251 [2024-12-06 13:45:48.478524] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:49.251 [2024-12-06 13:45:48.478533] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:49.251 [2024-12-06 13:45:48.478574] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:49.251 13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:06:49.251 13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:49.251 13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:06:49.251 ************************************ 00:06:49.251 END TEST dd_invalid_json 00:06:49.251 ************************************ 00:06:49.251 13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:06:49.251 13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:06:49.251 13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:49.251 00:06:49.251 real 0m0.308s 00:06:49.251 user 0m0.141s 00:06:49.251 sys 0m0.066s 00:06:49.251 13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.251 13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:06:49.251 13:45:48 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:06:49.251 13:45:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.251 13:45:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.251 13:45:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:49.251 ************************************ 00:06:49.251 START TEST dd_invalid_seek 00:06:49.251 ************************************ 00:06:49.251 13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:06:49.251 13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:49.251 13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:49.251 13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:06:49.251 13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:49.251 13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:06:49.251 
13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:06:49.251 13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:06:49.251 13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:06:49.251 13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:06:49.251 13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:49.251 13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:06:49.251 13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:06:49.251 13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:06:49.251 13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.251 13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:49.251 13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.251 13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:49.251 13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.251 13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:49.251 13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:49.251 13:45:48 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:06:49.251 [2024-12-06 13:45:48.653069] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:06:49.251 [2024-12-06 13:45:48.653185] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61877 ] 00:06:49.511 { 00:06:49.511 "subsystems": [ 00:06:49.511 { 00:06:49.511 "subsystem": "bdev", 00:06:49.511 "config": [ 00:06:49.511 { 00:06:49.511 "params": { 00:06:49.511 "block_size": 512, 00:06:49.511 "num_blocks": 512, 00:06:49.511 "name": "malloc0" 00:06:49.511 }, 00:06:49.511 "method": "bdev_malloc_create" 00:06:49.511 }, 00:06:49.511 { 00:06:49.511 "params": { 00:06:49.511 "block_size": 512, 00:06:49.511 "num_blocks": 512, 00:06:49.511 "name": "malloc1" 00:06:49.511 }, 00:06:49.511 "method": "bdev_malloc_create" 00:06:49.511 }, 00:06:49.511 { 00:06:49.511 "method": "bdev_wait_for_examine" 00:06:49.511 } 00:06:49.511 ] 00:06:49.511 } 00:06:49.511 ] 00:06:49.511 } 00:06:49.511 [2024-12-06 13:45:48.796701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.511 [2024-12-06 13:45:48.839204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.511 [2024-12-06 13:45:48.908283] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:49.770 [2024-12-06 13:45:48.979363] spdk_dd.c:1143:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:06:49.770 [2024-12-06 13:45:48.979707] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:49.770 [2024-12-06 13:45:49.137656] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:50.030 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:06:50.030 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:50.030 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:06:50.030 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:06:50.030 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:06:50.030 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:50.030 00:06:50.030 real 0m0.615s 00:06:50.030 user 0m0.398s 00:06:50.030 sys 0m0.180s 00:06:50.030 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.030 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:06:50.030 ************************************ 00:06:50.030 END TEST dd_invalid_seek 00:06:50.030 ************************************ 00:06:50.030 13:45:49 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:06:50.030 13:45:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.030 13:45:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.030 13:45:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:50.030 ************************************ 00:06:50.030 START TEST dd_invalid_skip 00:06:50.030 ************************************ 00:06:50.030 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:06:50.030 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_skip -- 
dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:50.030 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:50.030 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:06:50.030 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:50.030 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:06:50.030 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:06:50.031 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:06:50.031 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:06:50.031 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:06:50.031 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:06:50.031 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.031 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:06:50.031 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:06:50.031 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.031 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.031 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.031 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.031 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.031 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.031 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:50.031 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:06:50.031 [2024-12-06 13:45:49.313550] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:06:50.031 [2024-12-06 13:45:49.313623] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61915 ] 00:06:50.031 { 00:06:50.031 "subsystems": [ 00:06:50.031 { 00:06:50.031 "subsystem": "bdev", 00:06:50.031 "config": [ 00:06:50.031 { 00:06:50.031 "params": { 00:06:50.031 "block_size": 512, 00:06:50.031 "num_blocks": 512, 00:06:50.031 "name": "malloc0" 00:06:50.031 }, 00:06:50.031 "method": "bdev_malloc_create" 00:06:50.031 }, 00:06:50.031 { 00:06:50.031 "params": { 00:06:50.031 "block_size": 512, 00:06:50.031 "num_blocks": 512, 00:06:50.031 "name": "malloc1" 00:06:50.031 }, 00:06:50.031 "method": "bdev_malloc_create" 00:06:50.031 }, 00:06:50.031 { 00:06:50.031 "method": "bdev_wait_for_examine" 00:06:50.031 } 00:06:50.031 ] 00:06:50.031 } 00:06:50.031 ] 00:06:50.031 } 00:06:50.291 [2024-12-06 13:45:49.451666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.291 [2024-12-06 13:45:49.500690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.291 [2024-12-06 13:45:49.571842] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:50.291 [2024-12-06 13:45:49.643940] spdk_dd.c:1100:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:06:50.291 [2024-12-06 13:45:49.644012] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:50.550 [2024-12-06 13:45:49.805408] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:50.550 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:06:50.550 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:50.550 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:06:50.550 ************************************ 00:06:50.550 END TEST dd_invalid_skip 00:06:50.550 ************************************ 00:06:50.550 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:06:50.551 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:06:50.551 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:50.551 00:06:50.551 real 0m0.608s 00:06:50.551 user 0m0.380s 00:06:50.551 sys 0m0.183s 00:06:50.551 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.551 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:06:50.551 13:45:49 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:06:50.551 13:45:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.551 13:45:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.551 13:45:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:50.551 ************************************ 00:06:50.551 START TEST dd_invalid_input_count 00:06:50.551 ************************************ 00:06:50.551 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:06:50.551 13:45:49 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:50.551 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:50.551 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:06:50.551 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:50.551 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:06:50.551 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:06:50.551 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:06:50.551 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:06:50.551 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:06:50.551 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.551 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:06:50.551 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:06:50.551 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:06:50.551 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.551 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.551 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.551 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.551 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.551 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.551 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:50.551 13:45:49 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:06:50.810 { 00:06:50.810 "subsystems": [ 00:06:50.810 { 00:06:50.810 "subsystem": "bdev", 00:06:50.810 "config": [ 00:06:50.810 { 00:06:50.810 "params": { 00:06:50.810 "block_size": 512, 00:06:50.810 "num_blocks": 512, 00:06:50.810 "name": "malloc0" 00:06:50.810 }, 
00:06:50.810 "method": "bdev_malloc_create" 00:06:50.810 }, 00:06:50.810 { 00:06:50.810 "params": { 00:06:50.810 "block_size": 512, 00:06:50.810 "num_blocks": 512, 00:06:50.810 "name": "malloc1" 00:06:50.810 }, 00:06:50.810 "method": "bdev_malloc_create" 00:06:50.810 }, 00:06:50.810 { 00:06:50.810 "method": "bdev_wait_for_examine" 00:06:50.810 } 00:06:50.810 ] 00:06:50.810 } 00:06:50.810 ] 00:06:50.810 } 00:06:50.810 [2024-12-06 13:45:49.992639] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:06:50.810 [2024-12-06 13:45:49.992735] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61950 ] 00:06:50.810 [2024-12-06 13:45:50.136582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.810 [2024-12-06 13:45:50.180089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.070 [2024-12-06 13:45:50.248499] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:51.070 [2024-12-06 13:45:50.319782] spdk_dd.c:1108:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:06:51.070 [2024-12-06 13:45:50.319841] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:51.330 [2024-12-06 13:45:50.477047] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:51.330 13:45:50 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:06:51.330 13:45:50 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:51.330 13:45:50 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:06:51.330 13:45:50 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:06:51.330 13:45:50 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:06:51.330 13:45:50 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:51.330 00:06:51.330 real 0m0.614s 00:06:51.330 user 0m0.398s 00:06:51.330 sys 0m0.175s 00:06:51.330 ************************************ 00:06:51.330 END TEST dd_invalid_input_count 00:06:51.330 ************************************ 00:06:51.330 13:45:50 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.330 13:45:50 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:06:51.330 13:45:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:06:51.330 13:45:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.330 13:45:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.330 13:45:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:51.330 ************************************ 00:06:51.330 START TEST dd_invalid_output_count 00:06:51.330 ************************************ 00:06:51.330 13:45:50 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # invalid_output_count 00:06:51.330 13:45:50 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 
mbdev0_b=512 mbdev0_bs=512 00:06:51.330 13:45:50 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:51.330 13:45:50 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:06:51.330 13:45:50 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:06:51.330 13:45:50 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:06:51.330 13:45:50 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:06:51.330 13:45:50 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.330 13:45:50 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:06:51.330 13:45:50 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:06:51.330 13:45:50 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:06:51.330 13:45:50 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.330 13:45:50 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.330 13:45:50 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.330 13:45:50 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.330 13:45:50 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.330 13:45:50 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.330 13:45:50 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:51.330 13:45:50 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:06:51.330 { 00:06:51.330 "subsystems": [ 00:06:51.330 { 00:06:51.330 "subsystem": "bdev", 00:06:51.330 "config": [ 00:06:51.330 { 00:06:51.330 "params": { 00:06:51.330 "block_size": 512, 00:06:51.330 "num_blocks": 512, 00:06:51.330 "name": "malloc0" 00:06:51.330 }, 00:06:51.330 "method": "bdev_malloc_create" 00:06:51.330 }, 00:06:51.330 { 00:06:51.330 "method": "bdev_wait_for_examine" 00:06:51.330 } 00:06:51.330 ] 00:06:51.330 } 00:06:51.330 ] 00:06:51.330 } 00:06:51.330 [2024-12-06 13:45:50.660603] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:06:51.330 [2024-12-06 13:45:50.660700] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61983 ] 00:06:51.589 [2024-12-06 13:45:50.804843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.589 [2024-12-06 13:45:50.847805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.589 [2024-12-06 13:45:50.918276] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:51.589 [2024-12-06 13:45:50.981935] spdk_dd.c:1150:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:06:51.589 [2024-12-06 13:45:50.981997] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:51.847 [2024-12-06 13:45:51.142279] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:51.847 13:45:51 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:06:51.847 13:45:51 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:51.847 13:45:51 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:06:51.847 13:45:51 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:06:51.847 13:45:51 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:06:51.847 13:45:51 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:51.847 00:06:51.847 real 0m0.611s 00:06:51.847 user 0m0.384s 00:06:51.847 sys 0m0.183s 00:06:51.847 13:45:51 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.847 13:45:51 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:06:51.847 ************************************ 00:06:51.847 END TEST dd_invalid_output_count 00:06:51.847 ************************************ 00:06:52.106 13:45:51 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:06:52.106 13:45:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.106 13:45:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.106 13:45:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:52.106 ************************************ 00:06:52.106 START TEST dd_bs_not_multiple 00:06:52.106 ************************************ 00:06:52.106 13:45:51 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:06:52.106 13:45:51 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:52.106 13:45:51 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:52.106 13:45:51 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:06:52.106 13:45:51 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:52.106 13:45:51 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:06:52.106 13:45:51 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:06:52.106 13:45:51 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:06:52.106 13:45:51 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:06:52.106 13:45:51 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:06:52.106 13:45:51 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.106 13:45:51 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:06:52.106 13:45:51 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:06:52.106 13:45:51 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:06:52.106 13:45:51 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.106 13:45:51 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.106 13:45:51 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.106 13:45:51 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.106 13:45:51 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.106 13:45:51 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.106 13:45:51 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:52.106 13:45:51 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:06:52.106 [2024-12-06 13:45:51.314444] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:06:52.106 [2024-12-06 13:45:51.314518] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62015 ] 00:06:52.106 { 00:06:52.106 "subsystems": [ 00:06:52.106 { 00:06:52.106 "subsystem": "bdev", 00:06:52.106 "config": [ 00:06:52.106 { 00:06:52.106 "params": { 00:06:52.106 "block_size": 512, 00:06:52.106 "num_blocks": 512, 00:06:52.106 "name": "malloc0" 00:06:52.106 }, 00:06:52.106 "method": "bdev_malloc_create" 00:06:52.106 }, 00:06:52.106 { 00:06:52.106 "params": { 00:06:52.106 "block_size": 512, 00:06:52.106 "num_blocks": 512, 00:06:52.106 "name": "malloc1" 00:06:52.106 }, 00:06:52.106 "method": "bdev_malloc_create" 00:06:52.106 }, 00:06:52.106 { 00:06:52.106 "method": "bdev_wait_for_examine" 00:06:52.106 } 00:06:52.106 ] 00:06:52.106 } 00:06:52.106 ] 00:06:52.106 } 00:06:52.106 [2024-12-06 13:45:51.455686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.106 [2024-12-06 13:45:51.504362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.364 [2024-12-06 13:45:51.575617] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:52.364 [2024-12-06 13:45:51.647401] spdk_dd.c:1166:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:06:52.364 [2024-12-06 13:45:51.647777] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:52.622 [2024-12-06 13:45:51.809356] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:52.622 13:45:51 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:06:52.622 13:45:51 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:52.622 13:45:51 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:06:52.622 13:45:51 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:06:52.622 13:45:51 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:06:52.622 ************************************ 00:06:52.622 END TEST dd_bs_not_multiple 00:06:52.622 ************************************ 00:06:52.622 13:45:51 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:52.622 00:06:52.622 real 0m0.615s 00:06:52.622 user 0m0.377s 00:06:52.622 sys 0m0.195s 00:06:52.622 13:45:51 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.622 13:45:51 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:06:52.622 ************************************ 00:06:52.622 END TEST spdk_dd_negative 00:06:52.622 ************************************ 00:06:52.622 00:06:52.622 real 0m7.014s 00:06:52.622 user 0m3.685s 00:06:52.622 sys 0m2.697s 00:06:52.622 13:45:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.622 13:45:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:52.622 ************************************ 00:06:52.622 END TEST spdk_dd 00:06:52.622 ************************************ 00:06:52.622 00:06:52.622 real 1m21.244s 00:06:52.622 user 0m50.875s 00:06:52.622 sys 0m38.979s 00:06:52.622 13:45:51 spdk_dd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:06:52.622 13:45:51 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:52.622 13:45:52 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:52.622 13:45:52 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:52.622 13:45:52 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:52.622 13:45:52 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:52.622 13:45:52 -- common/autotest_common.sh@10 -- # set +x 00:06:52.881 13:45:52 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:52.881 13:45:52 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:52.881 13:45:52 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:52.881 13:45:52 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:52.881 13:45:52 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:52.881 13:45:52 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:52.881 13:45:52 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:52.881 13:45:52 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:52.881 13:45:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.881 13:45:52 -- common/autotest_common.sh@10 -- # set +x 00:06:52.881 ************************************ 00:06:52.881 START TEST nvmf_tcp 00:06:52.881 ************************************ 00:06:52.881 13:45:52 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:52.881 * Looking for test storage... 00:06:52.881 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:52.881 13:45:52 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:52.881 13:45:52 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:52.881 13:45:52 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:52.881 13:45:52 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:52.881 13:45:52 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:52.881 13:45:52 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:52.881 13:45:52 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:52.881 13:45:52 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:52.881 13:45:52 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:52.881 13:45:52 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:52.881 13:45:52 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:52.881 13:45:52 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:52.881 13:45:52 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:52.881 13:45:52 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:52.881 13:45:52 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:52.881 13:45:52 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:52.881 13:45:52 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:52.881 13:45:52 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:52.881 13:45:52 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:52.881 13:45:52 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:52.881 13:45:52 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:52.881 13:45:52 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:52.881 13:45:52 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:52.881 13:45:52 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:52.881 13:45:52 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:52.881 13:45:52 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:52.881 13:45:52 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:52.881 13:45:52 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:52.881 13:45:52 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:52.881 13:45:52 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:52.881 13:45:52 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:52.881 13:45:52 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:52.881 13:45:52 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:52.881 13:45:52 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:52.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.881 --rc genhtml_branch_coverage=1 00:06:52.881 --rc genhtml_function_coverage=1 00:06:52.881 --rc genhtml_legend=1 00:06:52.881 --rc geninfo_all_blocks=1 00:06:52.881 --rc geninfo_unexecuted_blocks=1 00:06:52.881 00:06:52.881 ' 00:06:52.881 13:45:52 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:52.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.881 --rc genhtml_branch_coverage=1 00:06:52.881 --rc genhtml_function_coverage=1 00:06:52.881 --rc genhtml_legend=1 00:06:52.881 --rc geninfo_all_blocks=1 00:06:52.881 --rc geninfo_unexecuted_blocks=1 00:06:52.881 00:06:52.881 ' 00:06:52.881 13:45:52 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:52.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.881 --rc genhtml_branch_coverage=1 00:06:52.881 --rc genhtml_function_coverage=1 00:06:52.881 --rc genhtml_legend=1 00:06:52.881 --rc geninfo_all_blocks=1 00:06:52.881 --rc geninfo_unexecuted_blocks=1 00:06:52.881 00:06:52.881 ' 00:06:52.881 13:45:52 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:52.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.881 --rc genhtml_branch_coverage=1 00:06:52.881 --rc genhtml_function_coverage=1 00:06:52.881 --rc genhtml_legend=1 00:06:52.881 --rc geninfo_all_blocks=1 00:06:52.881 --rc geninfo_unexecuted_blocks=1 00:06:52.881 00:06:52.881 ' 00:06:52.881 13:45:52 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:52.881 13:45:52 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:52.881 13:45:52 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:52.881 13:45:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:52.881 13:45:52 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.881 13:45:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:52.881 ************************************ 00:06:52.881 START TEST nvmf_target_core 00:06:52.881 ************************************ 00:06:52.881 13:45:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:53.141 * Looking for test storage... 00:06:53.141 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:53.141 13:45:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:53.141 13:45:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:06:53.141 13:45:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:53.141 13:45:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:53.141 13:45:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:53.141 13:45:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:53.141 13:45:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:53.141 13:45:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.141 13:45:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:53.141 13:45:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:53.141 13:45:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:53.141 13:45:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:53.141 13:45:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:53.141 13:45:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:53.141 13:45:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:53.141 13:45:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:53.141 13:45:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:53.141 13:45:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.141 13:45:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:53.141 13:45:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:53.141 13:45:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:53.141 13:45:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.141 13:45:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:53.141 13:45:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.141 13:45:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:53.141 13:45:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:53.141 13:45:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.141 13:45:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:53.141 13:45:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.141 13:45:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.141 13:45:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.141 13:45:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:53.141 13:45:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.141 13:45:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:53.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.141 --rc genhtml_branch_coverage=1 00:06:53.141 --rc genhtml_function_coverage=1 00:06:53.141 --rc genhtml_legend=1 00:06:53.141 --rc geninfo_all_blocks=1 00:06:53.141 --rc geninfo_unexecuted_blocks=1 00:06:53.141 00:06:53.141 ' 00:06:53.141 13:45:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:53.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.141 --rc genhtml_branch_coverage=1 00:06:53.141 --rc genhtml_function_coverage=1 00:06:53.141 --rc genhtml_legend=1 00:06:53.141 --rc geninfo_all_blocks=1 00:06:53.141 --rc geninfo_unexecuted_blocks=1 00:06:53.141 00:06:53.141 ' 00:06:53.141 13:45:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:53.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.141 --rc genhtml_branch_coverage=1 00:06:53.141 --rc genhtml_function_coverage=1 00:06:53.141 --rc genhtml_legend=1 00:06:53.141 --rc geninfo_all_blocks=1 00:06:53.141 --rc geninfo_unexecuted_blocks=1 00:06:53.141 00:06:53.141 ' 00:06:53.141 13:45:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:53.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.141 --rc genhtml_branch_coverage=1 00:06:53.141 --rc genhtml_function_coverage=1 00:06:53.141 --rc genhtml_legend=1 00:06:53.141 --rc geninfo_all_blocks=1 00:06:53.142 --rc geninfo_unexecuted_blocks=1 00:06:53.142 00:06:53.142 ' 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=cfa2def7-c8af-457f-82a0-b312efdea7f4 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:53.142 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:53.142 ************************************ 00:06:53.142 START TEST nvmf_host_management 00:06:53.142 ************************************ 00:06:53.142 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:53.403 * Looking for test storage... 
00:06:53.403 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:53.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.403 --rc genhtml_branch_coverage=1 00:06:53.403 --rc genhtml_function_coverage=1 00:06:53.403 --rc genhtml_legend=1 00:06:53.403 --rc geninfo_all_blocks=1 00:06:53.403 --rc geninfo_unexecuted_blocks=1 00:06:53.403 00:06:53.403 ' 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:53.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.403 --rc genhtml_branch_coverage=1 00:06:53.403 --rc genhtml_function_coverage=1 00:06:53.403 --rc genhtml_legend=1 00:06:53.403 --rc geninfo_all_blocks=1 00:06:53.403 --rc geninfo_unexecuted_blocks=1 00:06:53.403 00:06:53.403 ' 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:53.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.403 --rc genhtml_branch_coverage=1 00:06:53.403 --rc genhtml_function_coverage=1 00:06:53.403 --rc genhtml_legend=1 00:06:53.403 --rc geninfo_all_blocks=1 00:06:53.403 --rc geninfo_unexecuted_blocks=1 00:06:53.403 00:06:53.403 ' 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:53.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.403 --rc genhtml_branch_coverage=1 00:06:53.403 --rc genhtml_function_coverage=1 00:06:53.403 --rc genhtml_legend=1 00:06:53.403 --rc geninfo_all_blocks=1 00:06:53.403 --rc geninfo_unexecuted_blocks=1 00:06:53.403 00:06:53.403 ' 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=cfa2def7-c8af-457f-82a0-b312efdea7f4 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.403 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:53.404 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:53.404 13:45:52 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:06:53.404 Cannot find device "nvmf_init_br" 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:06:53.404 Cannot find device "nvmf_init_br2" 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:06:53.404 Cannot find device "nvmf_tgt_br" 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:06:53.404 Cannot find device "nvmf_tgt_br2" 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:06:53.404 Cannot find device "nvmf_init_br" 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:06:53.404 Cannot find device "nvmf_init_br2" 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:06:53.404 Cannot find device "nvmf_tgt_br" 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:06:53.404 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:06:53.708 Cannot find device "nvmf_tgt_br2" 00:06:53.708 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:06:53.708 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:06:53.708 Cannot find device "nvmf_br" 00:06:53.708 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:06:53.708 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:06:53.708 Cannot find device "nvmf_init_if" 00:06:53.708 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:06:53.708 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:06:53.708 Cannot find device "nvmf_init_if2" 00:06:53.708 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:06:53.708 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:53.708 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:53.708 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:06:53.708 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:53.708 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:53.708 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:06:53.708 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:06:53.708 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:53.708 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:06:53.708 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:53.708 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:53.708 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:53.708 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:53.708 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:53.708 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:06:53.708 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:06:53.708 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:06:53.708 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:06:53.708 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:06:53.708 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:06:53.708 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:06:53.708 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:06:53.708 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:06:53.708 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:53.708 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:53.708 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:53.708 13:45:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:06:53.708 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:06:53.708 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:06:53.708 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:06:53.708 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:53.708 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:53.708 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:53.708 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:06:53.966 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:06:53.966 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:06:53.966 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:53.966 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:06:53.966 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:06:53.966 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:53.966 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.143 ms 00:06:53.966 00:06:53.966 --- 10.0.0.3 ping statistics --- 00:06:53.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.966 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:06:53.966 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:06:53.966 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:06:53.966 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.107 ms 00:06:53.966 00:06:53.966 --- 10.0.0.4 ping statistics --- 00:06:53.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.966 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:06:53.967 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:53.967 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:53.967 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:06:53.967 00:06:53.967 --- 10.0.0.1 ping statistics --- 00:06:53.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.967 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:06:53.967 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:06:53.967 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:53.967 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:06:53.967 00:06:53.967 --- 10.0.0.2 ping statistics --- 00:06:53.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.967 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:06:53.967 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:53.967 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:06:53.967 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:53.967 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:53.967 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:53.967 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:53.967 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:53.967 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:53.967 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:53.967 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:53.967 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:53.967 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:53.967 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:53.967 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:53.967 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:53.967 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=62359 00:06:53.967 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 62359 00:06:53.967 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:53.967 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62359 ']' 00:06:53.967 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.967 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.967 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.967 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.967 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:53.967 [2024-12-06 13:45:53.288142] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:06:53.967 [2024-12-06 13:45:53.288242] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:54.226 [2024-12-06 13:45:53.441717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:54.226 [2024-12-06 13:45:53.508712] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:54.226 [2024-12-06 13:45:53.508759] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:54.226 [2024-12-06 13:45:53.508786] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:54.226 [2024-12-06 13:45:53.508795] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:54.226 [2024-12-06 13:45:53.508802] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:54.226 [2024-12-06 13:45:53.509981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.226 [2024-12-06 13:45:53.510137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:54.226 [2024-12-06 13:45:53.510162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:54.226 [2024-12-06 13:45:53.510166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.226 [2024-12-06 13:45:53.571759] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:54.485 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.485 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:54.485 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:54.485 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:54.485 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:54.485 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:54.485 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:54.485 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.485 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:54.485 [2024-12-06 13:45:53.695728] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:54.485 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.485 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:54.485 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:54.485 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:54.485 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
00:06:54.485 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:54.485 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:54.485 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.485 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:54.485 Malloc0 00:06:54.485 [2024-12-06 13:45:53.780781] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:06:54.485 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.485 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:54.485 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:54.485 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:54.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:54.485 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62405 00:06:54.485 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62405 /var/tmp/bdevperf.sock 00:06:54.485 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62405 ']' 00:06:54.485 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:54.485 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.485 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:54.485 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:06:54.485 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.485 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:54.485 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:54.485 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:54.485 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:54.485 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:54.485 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:54.485 { 00:06:54.485 "params": { 00:06:54.485 "name": "Nvme$subsystem", 00:06:54.485 "trtype": "$TEST_TRANSPORT", 00:06:54.485 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:54.485 "adrfam": "ipv4", 00:06:54.486 "trsvcid": "$NVMF_PORT", 00:06:54.486 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:54.486 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:54.486 "hdgst": ${hdgst:-false}, 00:06:54.486 "ddgst": ${ddgst:-false} 00:06:54.486 }, 00:06:54.486 "method": "bdev_nvme_attach_controller" 00:06:54.486 } 00:06:54.486 EOF 00:06:54.486 )") 00:06:54.486 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:54.486 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:54.486 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:54.486 13:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:54.486 "params": { 00:06:54.486 "name": "Nvme0", 00:06:54.486 "trtype": "tcp", 00:06:54.486 "traddr": "10.0.0.3", 00:06:54.486 "adrfam": "ipv4", 00:06:54.486 "trsvcid": "4420", 00:06:54.486 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:54.486 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:54.486 "hdgst": false, 00:06:54.486 "ddgst": false 00:06:54.486 }, 00:06:54.486 "method": "bdev_nvme_attach_controller" 00:06:54.486 }' 00:06:54.744 [2024-12-06 13:45:53.893387] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:06:54.744 [2024-12-06 13:45:53.893493] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62405 ] 00:06:54.744 [2024-12-06 13:45:54.046219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.744 [2024-12-06 13:45:54.112341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.002 [2024-12-06 13:45:54.204159] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:55.002 Running I/O for 10 seconds... 
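For reference, the bdev_nvme_attach_controller fragment printed just above is the piece that gen_nvmf_target_json pipes to bdevperf through --json /dev/fd/63. A minimal sketch of the complete document it would sit in, assuming the same "subsystems"/"bdev" wrapper that the spdk_dd run earlier in this log printed for its malloc bdevs (the jq step that assembles the wrapper, and any trailing entries it may add such as bdev_wait_for_examine, is not echoed in this chunk, so the outer layout here is reconstructed rather than copied):

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.3",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }

The parameter values are taken verbatim from the printf output above; bdevperf then opens Nvme0n1 over NVMe/TCP to the target listening on 10.0.0.3:4420 for the 10-second verify run that follows.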
00:06:55.002 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.002 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:55.002 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:55.002 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.002 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:55.260 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.260 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:55.260 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:55.260 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:55.260 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:55.260 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:55.260 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:55.260 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:55.260 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:55.260 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:55.260 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:55.260 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.260 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:55.260 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.260 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:06:55.260 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:06:55.260 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:06:55.521 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:06:55.521 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:55.521 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:55.521 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:55.521 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.521 13:45:54 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:55.521 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.521 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:06:55.521 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:06:55.521 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:55.521 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:55.521 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:55.521 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:55.521 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.521 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:55.521 [2024-12-06 13:45:54.799736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:55.521 [2024-12-06 13:45:54.799803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.521 [2024-12-06 13:45:54.799817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:06:55.521 [2024-12-06 13:45:54.799826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.521 [2024-12-06 13:45:54.799836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:06:55.521 [2024-12-06 13:45:54.799845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.521 [2024-12-06 13:45:54.799854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:06:55.521 [2024-12-06 13:45:54.799862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.521 [2024-12-06 13:45:54.799871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247c9d0 is same with the state(6) to be set 00:06:55.521 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.521 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:55.521 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.521 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:55.521 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.521 13:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 
-- # sleep 1 00:06:55.521 [2024-12-06 13:45:54.818263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.521 [2024-12-06 13:45:54.818299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.521 [2024-12-06 13:45:54.818318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.521 [2024-12-06 13:45:54.818327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.521 [2024-12-06 13:45:54.818338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.521 [2024-12-06 13:45:54.818346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.521 [2024-12-06 13:45:54.818356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.521 [2024-12-06 13:45:54.818365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.521 [2024-12-06 13:45:54.818374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.521 [2024-12-06 13:45:54.818383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.521 [2024-12-06 13:45:54.818399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.521 [2024-12-06 13:45:54.818407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.521 [2024-12-06 13:45:54.818426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.521 [2024-12-06 13:45:54.818435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.521 [2024-12-06 13:45:54.818456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.521 [2024-12-06 13:45:54.818464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.521 [2024-12-06 13:45:54.818474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.521 [2024-12-06 13:45:54.818483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.521 [2024-12-06 13:45:54.818503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.521 [2024-12-06 13:45:54.818521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:06:55.521 [2024-12-06 13:45:54.818530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.521 [2024-12-06 13:45:54.818538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.521 [2024-12-06 13:45:54.818547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.521 [2024-12-06 13:45:54.818555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.521 [2024-12-06 13:45:54.818576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.521 [2024-12-06 13:45:54.818584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.521 [2024-12-06 13:45:54.818603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.521 [2024-12-06 13:45:54.818635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.521 [2024-12-06 13:45:54.818655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.521 [2024-12-06 13:45:54.818662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.521 [2024-12-06 13:45:54.818674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.521 [2024-12-06 13:45:54.818682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.521 [2024-12-06 13:45:54.818692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.818700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 13:45:54.818712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.818720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 13:45:54.818729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.818738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 13:45:54.818747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.818755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 
[2024-12-06 13:45:54.818764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.818781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 13:45:54.818790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.818798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 13:45:54.818812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.818827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 13:45:54.818848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.818856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 13:45:54.818875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.818883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 13:45:54.818893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.818901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 13:45:54.818910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.818917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 13:45:54.818926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.818934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 13:45:54.818943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.818951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 13:45:54.818960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.818968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 
13:45:54.818978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.818985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 13:45:54.818995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.819002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 13:45:54.819012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.819020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 13:45:54.819029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.819038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 13:45:54.819047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.819055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 13:45:54.819065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.819072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 13:45:54.819082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.819090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 13:45:54.819099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.819107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 13:45:54.819145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.819157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 13:45:54.819167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.819175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 
13:45:54.819184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.819192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 13:45:54.819201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.819209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 13:45:54.819218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.819226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 13:45:54.819235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.819243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 13:45:54.819252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.819260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 13:45:54.819269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.819277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 13:45:54.819286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.819294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 13:45:54.819303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.819311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 13:45:54.819327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.819335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 13:45:54.819344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.819352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 
13:45:54.819361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.819369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 13:45:54.819379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.819387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 13:45:54.819396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.819404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 13:45:54.819412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.819435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 13:45:54.819452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.819460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 13:45:54.819470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.819478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 13:45:54.819487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.819496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 13:45:54.819505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.819512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 13:45:54.819522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.819530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 13:45:54.819539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.819547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 
13:45:54.819557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.819565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 13:45:54.819574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.819582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 13:45:54.819593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.819601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 13:45:54.819635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.522 [2024-12-06 13:45:54.819646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:55.522 [2024-12-06 13:45:54.819663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2480e30 is same with the state(6) to be set 00:06:55.522 [2024-12-06 13:45:54.819842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x247c9d0 (9): Bad file descriptor 00:06:55.522 task offset: 90112 on job bdev=Nvme0n1 fails 00:06:55.522 00:06:55.522 Latency(us) 00:06:55.522 [2024-12-06T13:45:54.926Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:55.522 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:55.522 Job: Nvme0n1 ended in about 0.47 seconds with error 00:06:55.523 Verification LBA range: start 0x0 length 0x400 00:06:55.523 Nvme0n1 : 0.47 1496.13 93.51 136.01 0.00 37685.69 2010.76 46947.61 00:06:55.523 [2024-12-06T13:45:54.927Z] =================================================================================================================== 00:06:55.523 [2024-12-06T13:45:54.927Z] Total : 1496.13 93.51 136.01 0.00 37685.69 2010.76 46947.61 00:06:55.523 [2024-12-06 13:45:54.820873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:55.523 [2024-12-06 13:45:54.822533] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:55.523 [2024-12-06 13:45:54.834289] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:06:56.458 13:45:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62405 00:06:56.458 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62405) - No such process 00:06:56.458 13:45:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:56.458 13:45:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:56.458 13:45:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:56.458 13:45:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:56.458 13:45:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:56.458 13:45:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:56.458 13:45:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:56.458 13:45:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:56.458 { 00:06:56.458 "params": { 00:06:56.458 "name": "Nvme$subsystem", 00:06:56.458 "trtype": "$TEST_TRANSPORT", 00:06:56.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:56.458 "adrfam": "ipv4", 00:06:56.458 "trsvcid": "$NVMF_PORT", 00:06:56.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:56.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:56.458 "hdgst": ${hdgst:-false}, 00:06:56.458 "ddgst": ${ddgst:-false} 00:06:56.458 }, 00:06:56.458 "method": "bdev_nvme_attach_controller" 00:06:56.458 } 00:06:56.458 EOF 00:06:56.458 )") 00:06:56.458 13:45:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:56.458 13:45:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:56.458 13:45:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:56.458 13:45:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:56.458 "params": { 00:06:56.458 "name": "Nvme0", 00:06:56.458 "trtype": "tcp", 00:06:56.458 "traddr": "10.0.0.3", 00:06:56.458 "adrfam": "ipv4", 00:06:56.458 "trsvcid": "4420", 00:06:56.458 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:56.458 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:56.458 "hdgst": false, 00:06:56.458 "ddgst": false 00:06:56.458 }, 00:06:56.458 "method": "bdev_nvme_attach_controller" 00:06:56.458 }' 00:06:56.717 [2024-12-06 13:45:55.872838] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
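The second bdevperf instance above takes its bdev configuration as JSON on file descriptor 62 (--json /dev/fd/62); the printf in the trace shows the single bdev_nvme_attach_controller entry that gen_nvmf_target_json resolved for Nvme0. A sketch of an equivalent stand-alone invocation, run from the SPDK repo root and assuming the usual SPDK JSON-config envelope around that entry (the "subsystems"/"bdev" wrapper itself is not visible in this excerpt):

# Feed the config to bdevperf on fd 62, the same way the harness does with /dev/fd/62.
./build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 62<<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

With the host entry restored on cnode0, this run completes cleanly; the 1-second verify job below reports roughly 1593 IOPS with no failures.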
00:06:56.717 [2024-12-06 13:45:55.872945] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62445 ] 00:06:56.717 [2024-12-06 13:45:56.014680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.717 [2024-12-06 13:45:56.067818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.976 [2024-12-06 13:45:56.155374] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:56.976 Running I/O for 1 seconds... 00:06:57.913 1536.00 IOPS, 96.00 MiB/s 00:06:57.913 Latency(us) 00:06:57.913 [2024-12-06T13:45:57.317Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:57.913 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:57.913 Verification LBA range: start 0x0 length 0x400 00:06:57.913 Nvme0n1 : 1.00 1593.19 99.57 0.00 0.00 39417.81 5451.40 39798.23 00:06:57.913 [2024-12-06T13:45:57.317Z] =================================================================================================================== 00:06:57.913 [2024-12-06T13:45:57.317Z] Total : 1593.19 99.57 0.00 0.00 39417.81 5451.40 39798.23 00:06:58.483 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:58.483 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:58.483 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:06:58.483 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:06:58.483 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:58.483 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:58.483 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:58.483 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:58.483 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:58.483 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:58.483 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:58.483 rmmod nvme_tcp 00:06:58.483 rmmod nvme_fabrics 00:06:58.483 rmmod nvme_keyring 00:06:58.483 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:58.483 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:58.483 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:58.483 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 62359 ']' 00:06:58.483 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 62359 00:06:58.483 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 62359 ']' 00:06:58.483 13:45:57 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 62359 00:06:58.483 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:58.483 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:58.483 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62359 00:06:58.483 killing process with pid 62359 00:06:58.483 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:58.483 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:58.483 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62359' 00:06:58.483 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 62359 00:06:58.483 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 62359 00:06:58.742 [2024-12-06 13:45:57.939771] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:58.742 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:58.742 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:58.742 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:58.742 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:58.742 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:58.742 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:58.742 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:58.742 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:58.742 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:06:58.742 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:06:58.742 13:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:06:58.742 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:06:58.742 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:06:58.742 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:06:58.742 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:06:58.742 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:06:58.742 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:06:58.742 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:06:58.742 13:45:58 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:06:58.742 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:06:59.003 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:59.003 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:59.003 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:06:59.003 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:59.003 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:59.003 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:59.003 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:06:59.003 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:59.003 00:06:59.003 real 0m5.738s 00:06:59.003 user 0m20.279s 00:06:59.003 sys 0m1.662s 00:06:59.003 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.003 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:59.003 ************************************ 00:06:59.003 END TEST nvmf_host_management 00:06:59.003 ************************************ 00:06:59.003 13:45:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:59.003 13:45:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:59.003 13:45:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.003 13:45:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:59.003 ************************************ 00:06:59.003 START TEST nvmf_lvol 00:06:59.003 ************************************ 00:06:59.003 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:59.003 * Looking for test storage... 
00:06:59.003 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:59.003 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:59.003 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:06:59.003 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:59.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.264 --rc genhtml_branch_coverage=1 00:06:59.264 --rc genhtml_function_coverage=1 00:06:59.264 --rc genhtml_legend=1 00:06:59.264 --rc geninfo_all_blocks=1 00:06:59.264 --rc geninfo_unexecuted_blocks=1 00:06:59.264 00:06:59.264 ' 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:59.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.264 --rc genhtml_branch_coverage=1 00:06:59.264 --rc genhtml_function_coverage=1 00:06:59.264 --rc genhtml_legend=1 00:06:59.264 --rc geninfo_all_blocks=1 00:06:59.264 --rc geninfo_unexecuted_blocks=1 00:06:59.264 00:06:59.264 ' 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:59.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.264 --rc genhtml_branch_coverage=1 00:06:59.264 --rc genhtml_function_coverage=1 00:06:59.264 --rc genhtml_legend=1 00:06:59.264 --rc geninfo_all_blocks=1 00:06:59.264 --rc geninfo_unexecuted_blocks=1 00:06:59.264 00:06:59.264 ' 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:59.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.264 --rc genhtml_branch_coverage=1 00:06:59.264 --rc genhtml_function_coverage=1 00:06:59.264 --rc genhtml_legend=1 00:06:59.264 --rc geninfo_all_blocks=1 00:06:59.264 --rc geninfo_unexecuted_blocks=1 00:06:59.264 00:06:59.264 ' 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:59.264 13:45:58 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=cfa2def7-c8af-457f-82a0-b312efdea7f4 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.264 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:59.265 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:59.265 
13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:06:59.265 Cannot find device "nvmf_init_br" 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:06:59.265 Cannot find device "nvmf_init_br2" 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:06:59.265 Cannot find device "nvmf_tgt_br" 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:06:59.265 Cannot find device "nvmf_tgt_br2" 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:06:59.265 Cannot find device "nvmf_init_br" 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:06:59.265 Cannot find device "nvmf_init_br2" 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:06:59.265 Cannot find device "nvmf_tgt_br" 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:06:59.265 Cannot find device "nvmf_tgt_br2" 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:06:59.265 Cannot find device "nvmf_br" 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:06:59.265 Cannot find device "nvmf_init_if" 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:06:59.265 Cannot find device "nvmf_init_if2" 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:59.265 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:59.265 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:59.265 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:06:59.524 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:59.524 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:59.524 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:59.524 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:59.524 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:59.524 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:06:59.524 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:06:59.524 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:06:59.524 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:06:59.524 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:06:59.524 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:06:59.524 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:06:59.524 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:06:59.524 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:06:59.524 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:59.524 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:59.524 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:59.524 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:06:59.524 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:06:59.524 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:06:59.524 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:06:59.524 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:59.524 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:59.524 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:59.524 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:06:59.524 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:06:59.524 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:06:59.524 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:59.524 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:06:59.524 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:06:59.524 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:59.524 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:06:59.524 00:06:59.524 --- 10.0.0.3 ping statistics --- 00:06:59.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:59.525 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:06:59.525 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:06:59.525 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:06:59.525 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:06:59.525 00:06:59.525 --- 10.0.0.4 ping statistics --- 00:06:59.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:59.525 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:06:59.525 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:59.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:59.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:06:59.525 00:06:59.525 --- 10.0.0.1 ping statistics --- 00:06:59.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:59.525 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:06:59.525 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:06:59.525 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:59.525 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:06:59.525 00:06:59.525 --- 10.0.0.2 ping statistics --- 00:06:59.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:59.525 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:06:59.783 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:59.783 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:06:59.783 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:59.783 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:59.783 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:59.783 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:59.783 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:59.783 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:59.783 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:59.783 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:59.783 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:59.783 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:59.783 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:59.783 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=62717 00:06:59.783 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 62717 00:06:59.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.783 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 62717 ']' 00:06:59.783 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.783 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:59.783 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.783 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.783 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.783 13:45:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:59.783 [2024-12-06 13:45:59.022885] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
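At this point the trace has verified connectivity and is starting the NVMe-oF target inside the namespace (nvmfappstart -m 0x7). The lvol test then provisions storage purely over JSON-RPC: a RAID-0 of two malloc bdevs, an lvstore on top of it, a 20 MiB lvol, and a TCP subsystem exporting that lvol on 10.0.0.3:4420. A condensed sketch of that sequence follows; the commands are taken from the trace, but the readiness poll via rpc_get_methods is an assumption standing in for the harness's waitforlisten helper:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Start the target inside the test namespace (core mask 0x7 = cores 0-2).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &

    # Poll the default RPC socket until the app answers (assumed stand-in for
    # the harness's waitforlisten).
    until "$rpc" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

    # Transport, backing bdevs, lvstore and a 20 MiB lvol, as in the trace.
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" bdev_malloc_create 64 512        # -> Malloc0
    "$rpc" bdev_malloc_create 64 512        # -> Malloc1
    "$rpc" bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$("$rpc" bdev_lvol_create_lvstore raid0 lvs)
    lvol=$("$rpc" bdev_lvol_create -u "$lvs" lvol 20)

    # Export the lvol over NVMe/TCP on the namespace-side address.
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

The trace then drives I/O with spdk_nvme_perf against 10.0.0.3:4420 and exercises bdev_lvol_snapshot, bdev_lvol_resize, bdev_lvol_clone and bdev_lvol_inflate against the same lvol before tearing everything down.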
00:06:59.783 [2024-12-06 13:45:59.022972] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:59.783 [2024-12-06 13:45:59.177316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:00.042 [2024-12-06 13:45:59.239794] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:00.042 [2024-12-06 13:45:59.240150] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:00.042 [2024-12-06 13:45:59.240342] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:00.042 [2024-12-06 13:45:59.240496] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:00.042 [2024-12-06 13:45:59.240538] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:00.042 [2024-12-06 13:45:59.242170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.042 [2024-12-06 13:45:59.242302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.042 [2024-12-06 13:45:59.242313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.042 [2024-12-06 13:45:59.318284] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:00.977 13:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.977 13:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:00.977 13:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:00.977 13:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:00.977 13:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:00.977 13:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:00.977 13:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:01.236 [2024-12-06 13:46:00.403365] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:01.236 13:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:01.495 13:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:01.495 13:46:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:01.754 13:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:01.754 13:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:02.322 13:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:02.579 13:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=fefacf0e-4b0f-455b-ad73-934aab740dcd 00:07:02.579 13:46:01 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u fefacf0e-4b0f-455b-ad73-934aab740dcd lvol 20 00:07:02.837 13:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=1225665a-e06b-4452-ab09-41120b4666e5 00:07:02.837 13:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:02.838 13:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1225665a-e06b-4452-ab09-41120b4666e5 00:07:03.096 13:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:03.354 [2024-12-06 13:46:02.717657] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:03.354 13:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:03.922 13:46:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=62797 00:07:03.922 13:46:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:03.922 13:46:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:04.859 13:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 1225665a-e06b-4452-ab09-41120b4666e5 MY_SNAPSHOT 00:07:05.119 13:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=f8cf3255-6331-47cf-9b48-2f2b2db99b90 00:07:05.119 13:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 1225665a-e06b-4452-ab09-41120b4666e5 30 00:07:05.379 13:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone f8cf3255-6331-47cf-9b48-2f2b2db99b90 MY_CLONE 00:07:05.638 13:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=59d0ca7f-2bed-4a42-857f-1d612fff9aee 00:07:05.638 13:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 59d0ca7f-2bed-4a42-857f-1d612fff9aee 00:07:06.205 13:46:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 62797 00:07:14.351 Initializing NVMe Controllers 00:07:14.351 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:07:14.351 Controller IO queue size 128, less than required. 00:07:14.351 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:14.351 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:14.351 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:14.351 Initialization complete. Launching workers. 
00:07:14.351 ======================================================== 00:07:14.351 Latency(us) 00:07:14.351 Device Information : IOPS MiB/s Average min max 00:07:14.351 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 6995.50 27.33 18322.71 517.04 75374.94 00:07:14.351 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 7706.70 30.10 16630.02 4373.64 72274.67 00:07:14.351 ======================================================== 00:07:14.351 Total : 14702.20 57.43 17435.42 517.04 75374.94 00:07:14.351 00:07:14.351 13:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:14.351 13:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 1225665a-e06b-4452-ab09-41120b4666e5 00:07:14.610 13:46:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fefacf0e-4b0f-455b-ad73-934aab740dcd 00:07:14.868 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:14.868 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:14.868 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:14.868 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:14.868 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:15.127 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:15.127 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:15.127 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:15.127 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:15.127 rmmod nvme_tcp 00:07:15.127 rmmod nvme_fabrics 00:07:15.127 rmmod nvme_keyring 00:07:15.127 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:15.127 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:15.127 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:15.127 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 62717 ']' 00:07:15.127 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 62717 00:07:15.127 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 62717 ']' 00:07:15.127 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 62717 00:07:15.127 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:15.127 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:15.127 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62717 00:07:15.127 killing process with pid 62717 00:07:15.127 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:15.127 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:15.127 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 62717' 00:07:15.127 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 62717 00:07:15.127 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 62717 00:07:15.386 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:15.386 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:15.386 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:15.386 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:15.386 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:15.386 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:15.386 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:15.386 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:15.386 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:15.386 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:15.386 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:15.386 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:15.386 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:15.386 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:15.386 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:15.386 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:15.386 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:15.386 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:15.646 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:15.646 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:15.646 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:15.646 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:15.646 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:15.646 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:15.646 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:15.646 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:15.646 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:07:15.646 00:07:15.646 real 0m16.656s 00:07:15.646 user 1m8.390s 00:07:15.646 sys 0m3.727s 00:07:15.646 ************************************ 00:07:15.646 END TEST nvmf_lvol 00:07:15.646 
************************************ 00:07:15.646 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.646 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:15.646 13:46:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:15.646 13:46:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:15.646 13:46:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.646 13:46:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:15.646 ************************************ 00:07:15.646 START TEST nvmf_lvs_grow 00:07:15.646 ************************************ 00:07:15.646 13:46:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:15.905 * Looking for test storage... 00:07:15.905 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:15.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.905 --rc genhtml_branch_coverage=1 00:07:15.905 --rc genhtml_function_coverage=1 00:07:15.905 --rc genhtml_legend=1 00:07:15.905 --rc geninfo_all_blocks=1 00:07:15.905 --rc geninfo_unexecuted_blocks=1 00:07:15.905 00:07:15.905 ' 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:15.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.905 --rc genhtml_branch_coverage=1 00:07:15.905 --rc genhtml_function_coverage=1 00:07:15.905 --rc genhtml_legend=1 00:07:15.905 --rc geninfo_all_blocks=1 00:07:15.905 --rc geninfo_unexecuted_blocks=1 00:07:15.905 00:07:15.905 ' 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:15.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.905 --rc genhtml_branch_coverage=1 00:07:15.905 --rc genhtml_function_coverage=1 00:07:15.905 --rc genhtml_legend=1 00:07:15.905 --rc geninfo_all_blocks=1 00:07:15.905 --rc geninfo_unexecuted_blocks=1 00:07:15.905 00:07:15.905 ' 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:15.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.905 --rc genhtml_branch_coverage=1 00:07:15.905 --rc genhtml_function_coverage=1 00:07:15.905 --rc genhtml_legend=1 00:07:15.905 --rc geninfo_all_blocks=1 00:07:15.905 --rc geninfo_unexecuted_blocks=1 00:07:15.905 00:07:15.905 ' 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:15.905 13:46:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=cfa2def7-c8af-457f-82a0-b312efdea7f4 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:15.905 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:15.906 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
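One harness quirk is visible just above: line 33 of test/nvmf/common.sh runs a numeric test against a value that expands to the empty string ('[' '' -eq 1 ']'), so bash prints "[: : integer expression expected" and the check simply evaluates false. The run is unaffected, but the usual way to make such a check silent is to default the value before comparing; a minimal sketch (the variable name here is illustrative, not the one used by the harness):

    # Default an unset/empty value to 0 so the numeric test never sees ''.
    if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi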
00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:15.906 Cannot find device "nvmf_init_br" 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:15.906 Cannot find device "nvmf_init_br2" 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:15.906 Cannot find device "nvmf_tgt_br" 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:15.906 Cannot find device "nvmf_tgt_br2" 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:15.906 Cannot find device "nvmf_init_br" 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:07:15.906 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:16.164 Cannot find device "nvmf_init_br2" 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:16.164 Cannot find device "nvmf_tgt_br" 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:16.164 Cannot find device "nvmf_tgt_br2" 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:16.164 Cannot find device "nvmf_br" 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:16.164 Cannot find device "nvmf_init_if" 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:16.164 Cannot find device "nvmf_init_if2" 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:16.164 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:16.164 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
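The firewall rules added next are opened through the harness's ipts wrapper, which tags every rule with an SPDK_NVMF comment; the matching iptr helper (seen in the nvmf_lvol teardown earlier, around 13:46:14) removes them by filtering a full iptables-save dump through grep -v SPDK_NVMF and restoring the result. Reconstructed from the expanded commands in the trace (the real helpers live in nvmf/common.sh), the pattern amounts to:

    # Insert rules tagged with a comment so they can be swept out later
    # without touching unrelated firewall state.
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }

    ipts -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # ... test runs ...
    iptr    # drops every rule whose saved line mentions SPDK_NVMF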
00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:16.164 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:16.423 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:16.423 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:16.423 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:16.423 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:16.423 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:07:16.423 00:07:16.423 --- 10.0.0.3 ping statistics --- 00:07:16.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.423 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:07:16.423 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:16.423 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:16.423 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:07:16.423 00:07:16.423 --- 10.0.0.4 ping statistics --- 00:07:16.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.423 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:07:16.423 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:16.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:16.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:07:16.423 00:07:16.423 --- 10.0.0.1 ping statistics --- 00:07:16.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.423 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:07:16.423 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:16.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:16.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:07:16.423 00:07:16.423 --- 10.0.0.2 ping statistics --- 00:07:16.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.423 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:07:16.423 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:16.423 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:07:16.423 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:16.423 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:16.423 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:16.423 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:16.423 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:16.423 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:16.423 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:16.423 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:16.423 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:16.423 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:16.423 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:16.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.423 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=63184 00:07:16.423 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:16.423 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 63184 00:07:16.423 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 63184 ']' 00:07:16.423 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.423 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.423 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.423 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.423 13:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:16.423 [2024-12-06 13:46:15.672454] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
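Connectivity is confirmed again and the second target is about to start. Note the core masks: the nvmf_lvol run above used -m 0x7 and its trace showed reactors on cores 0, 1 and 2, while this nvmf_lvs_grow run uses -m 0x1 and goes on to start a single reactor on core 0 (visible just below). The -m argument is a plain hex bitmask of CPU ids; a small illustrative helper (not part of the harness) for building one:

    # Build an SPDK -m core mask from a list of CPU ids.
    core_mask() {
        local mask=0 cpu
        for cpu in "$@"; do (( mask |= 1 << cpu )); done
        printf '0x%x\n' "$mask"
    }
    core_mask 0 1 2   # prints 0x7, the mask used by the nvmf_lvol run
    core_mask 0       # prints 0x1, the mask used by this run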
00:07:16.424 [2024-12-06 13:46:15.672657] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:16.424 [2024-12-06 13:46:15.813572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.683 [2024-12-06 13:46:15.862308] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:16.683 [2024-12-06 13:46:15.862590] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:16.683 [2024-12-06 13:46:15.862759] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:16.683 [2024-12-06 13:46:15.862933] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:16.683 [2024-12-06 13:46:15.862968] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:16.683 [2024-12-06 13:46:15.863451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.683 [2024-12-06 13:46:15.930280] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:17.251 13:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:17.251 13:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:17.251 13:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:17.251 13:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:17.252 13:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:17.511 13:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:17.511 13:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:17.511 [2024-12-06 13:46:16.887068] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:17.511 13:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:17.511 13:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:17.511 13:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.511 13:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:17.770 ************************************ 00:07:17.770 START TEST lvs_grow_clean 00:07:17.770 ************************************ 00:07:17.770 13:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:17.770 13:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:17.770 13:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:17.770 13:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:17.770 13:46:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:17.770 13:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:17.770 13:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:17.770 13:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:17.770 13:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:17.770 13:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:18.029 13:46:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:18.029 13:46:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:18.288 13:46:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=6876abda-85cf-47e7-bf67-a4f9c0f5d970 00:07:18.288 13:46:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6876abda-85cf-47e7-bf67-a4f9c0f5d970 00:07:18.288 13:46:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:18.547 13:46:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:18.547 13:46:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:18.547 13:46:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6876abda-85cf-47e7-bf67-a4f9c0f5d970 lvol 150 00:07:18.807 13:46:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=9909235b-f51e-44ff-b756-bf0fa0de6303 00:07:18.807 13:46:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:18.807 13:46:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:18.807 [2024-12-06 13:46:18.153676] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:18.807 [2024-12-06 13:46:18.153730] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:18.807 true 00:07:18.807 13:46:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6876abda-85cf-47e7-bf67-a4f9c0f5d970 00:07:18.807 13:46:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:19.066 13:46:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:19.066 13:46:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:19.325 13:46:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9909235b-f51e-44ff-b756-bf0fa0de6303 00:07:19.584 13:46:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:19.843 [2024-12-06 13:46:19.066086] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:19.844 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:20.103 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:20.103 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63266 00:07:20.103 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:20.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:20.103 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63266 /var/tmp/bdevperf.sock 00:07:20.103 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 63266 ']' 00:07:20.103 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:20.103 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.103 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:20.103 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.103 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:20.103 [2024-12-06 13:46:19.347137] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:07:20.103 [2024-12-06 13:46:19.347407] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63266 ] 00:07:20.103 [2024-12-06 13:46:19.484970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.362 [2024-12-06 13:46:19.544585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.362 [2024-12-06 13:46:19.601076] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:20.362 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.362 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:20.362 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:20.621 Nvme0n1 00:07:20.621 13:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:20.881 [ 00:07:20.881 { 00:07:20.881 "name": "Nvme0n1", 00:07:20.881 "aliases": [ 00:07:20.881 "9909235b-f51e-44ff-b756-bf0fa0de6303" 00:07:20.881 ], 00:07:20.881 "product_name": "NVMe disk", 00:07:20.881 "block_size": 4096, 00:07:20.881 "num_blocks": 38912, 00:07:20.881 "uuid": "9909235b-f51e-44ff-b756-bf0fa0de6303", 00:07:20.881 "numa_id": -1, 00:07:20.881 "assigned_rate_limits": { 00:07:20.881 "rw_ios_per_sec": 0, 00:07:20.881 "rw_mbytes_per_sec": 0, 00:07:20.881 "r_mbytes_per_sec": 0, 00:07:20.881 "w_mbytes_per_sec": 0 00:07:20.881 }, 00:07:20.881 "claimed": false, 00:07:20.881 "zoned": false, 00:07:20.881 "supported_io_types": { 00:07:20.881 "read": true, 00:07:20.881 "write": true, 00:07:20.881 "unmap": true, 00:07:20.881 "flush": true, 00:07:20.881 "reset": true, 00:07:20.881 "nvme_admin": true, 00:07:20.881 "nvme_io": true, 00:07:20.881 "nvme_io_md": false, 00:07:20.881 "write_zeroes": true, 00:07:20.881 "zcopy": false, 00:07:20.881 "get_zone_info": false, 00:07:20.881 "zone_management": false, 00:07:20.881 "zone_append": false, 00:07:20.881 "compare": true, 00:07:20.881 "compare_and_write": true, 00:07:20.881 "abort": true, 00:07:20.881 "seek_hole": false, 00:07:20.881 "seek_data": false, 00:07:20.881 "copy": true, 00:07:20.881 "nvme_iov_md": false 00:07:20.881 }, 00:07:20.881 "memory_domains": [ 00:07:20.881 { 00:07:20.881 "dma_device_id": "system", 00:07:20.881 "dma_device_type": 1 00:07:20.881 } 00:07:20.881 ], 00:07:20.881 "driver_specific": { 00:07:20.881 "nvme": [ 00:07:20.881 { 00:07:20.881 "trid": { 00:07:20.881 "trtype": "TCP", 00:07:20.881 "adrfam": "IPv4", 00:07:20.881 "traddr": "10.0.0.3", 00:07:20.881 "trsvcid": "4420", 00:07:20.881 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:20.881 }, 00:07:20.881 "ctrlr_data": { 00:07:20.881 "cntlid": 1, 00:07:20.881 "vendor_id": "0x8086", 00:07:20.881 "model_number": "SPDK bdev Controller", 00:07:20.881 "serial_number": "SPDK0", 00:07:20.881 "firmware_revision": "25.01", 00:07:20.881 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:20.881 "oacs": { 00:07:20.881 "security": 0, 00:07:20.881 "format": 0, 00:07:20.881 "firmware": 0, 
00:07:20.881 "ns_manage": 0 00:07:20.881 }, 00:07:20.881 "multi_ctrlr": true, 00:07:20.881 "ana_reporting": false 00:07:20.881 }, 00:07:20.881 "vs": { 00:07:20.881 "nvme_version": "1.3" 00:07:20.881 }, 00:07:20.881 "ns_data": { 00:07:20.881 "id": 1, 00:07:20.881 "can_share": true 00:07:20.881 } 00:07:20.881 } 00:07:20.881 ], 00:07:20.881 "mp_policy": "active_passive" 00:07:20.881 } 00:07:20.881 } 00:07:20.881 ] 00:07:20.881 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:20.881 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63282 00:07:20.882 13:46:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:21.141 Running I/O for 10 seconds... 00:07:22.079 Latency(us) 00:07:22.079 [2024-12-06T13:46:21.483Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:22.079 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:22.079 Nvme0n1 : 1.00 8653.00 33.80 0.00 0.00 0.00 0.00 0.00 00:07:22.079 [2024-12-06T13:46:21.483Z] =================================================================================================================== 00:07:22.079 [2024-12-06T13:46:21.483Z] Total : 8653.00 33.80 0.00 0.00 0.00 0.00 0.00 00:07:22.079 00:07:23.016 13:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6876abda-85cf-47e7-bf67-a4f9c0f5d970 00:07:23.016 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:23.016 Nvme0n1 : 2.00 8835.00 34.51 0.00 0.00 0.00 0.00 0.00 00:07:23.016 [2024-12-06T13:46:22.420Z] =================================================================================================================== 00:07:23.016 [2024-12-06T13:46:22.420Z] Total : 8835.00 34.51 0.00 0.00 0.00 0.00 0.00 00:07:23.016 00:07:23.276 true 00:07:23.276 13:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6876abda-85cf-47e7-bf67-a4f9c0f5d970 00:07:23.276 13:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:23.535 13:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:23.535 13:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:23.535 13:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63282 00:07:24.102 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:24.102 Nvme0n1 : 3.00 8768.67 34.25 0.00 0.00 0.00 0.00 0.00 00:07:24.102 [2024-12-06T13:46:23.506Z] =================================================================================================================== 00:07:24.102 [2024-12-06T13:46:23.506Z] Total : 8768.67 34.25 0.00 0.00 0.00 0.00 0.00 00:07:24.102 00:07:25.036 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:25.036 Nvme0n1 : 4.00 8610.75 33.64 0.00 0.00 0.00 0.00 0.00 00:07:25.036 [2024-12-06T13:46:24.440Z] 
=================================================================================================================== 00:07:25.036 [2024-12-06T13:46:24.440Z] Total : 8610.75 33.64 0.00 0.00 0.00 0.00 0.00 00:07:25.036 00:07:25.972 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:25.972 Nvme0n1 : 5.00 8615.80 33.66 0.00 0.00 0.00 0.00 0.00 00:07:25.972 [2024-12-06T13:46:25.376Z] =================================================================================================================== 00:07:25.972 [2024-12-06T13:46:25.376Z] Total : 8615.80 33.66 0.00 0.00 0.00 0.00 0.00 00:07:25.972 00:07:26.950 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:26.950 Nvme0n1 : 6.00 8598.00 33.59 0.00 0.00 0.00 0.00 0.00 00:07:26.950 [2024-12-06T13:46:26.354Z] =================================================================================================================== 00:07:26.950 [2024-12-06T13:46:26.354Z] Total : 8598.00 33.59 0.00 0.00 0.00 0.00 0.00 00:07:26.950 00:07:28.328 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:28.328 Nvme0n1 : 7.00 8603.43 33.61 0.00 0.00 0.00 0.00 0.00 00:07:28.328 [2024-12-06T13:46:27.732Z] =================================================================================================================== 00:07:28.328 [2024-12-06T13:46:27.732Z] Total : 8603.43 33.61 0.00 0.00 0.00 0.00 0.00 00:07:28.328 00:07:28.897 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:28.897 Nvme0n1 : 8.00 8591.62 33.56 0.00 0.00 0.00 0.00 0.00 00:07:28.897 [2024-12-06T13:46:28.301Z] =================================================================================================================== 00:07:28.897 [2024-12-06T13:46:28.301Z] Total : 8591.62 33.56 0.00 0.00 0.00 0.00 0.00 00:07:28.897 00:07:30.275 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.275 Nvme0n1 : 9.00 8610.67 33.64 0.00 0.00 0.00 0.00 0.00 00:07:30.275 [2024-12-06T13:46:29.679Z] =================================================================================================================== 00:07:30.275 [2024-12-06T13:46:29.679Z] Total : 8610.67 33.64 0.00 0.00 0.00 0.00 0.00 00:07:30.275 00:07:31.212 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:31.212 Nvme0n1 : 10.00 8625.90 33.69 0.00 0.00 0.00 0.00 0.00 00:07:31.212 [2024-12-06T13:46:30.616Z] =================================================================================================================== 00:07:31.212 [2024-12-06T13:46:30.616Z] Total : 8625.90 33.69 0.00 0.00 0.00 0.00 0.00 00:07:31.212 00:07:31.212 00:07:31.212 Latency(us) 00:07:31.212 [2024-12-06T13:46:30.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:31.212 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:31.212 Nvme0n1 : 10.00 8636.39 33.74 0.00 0.00 14816.73 4885.41 104857.60 00:07:31.212 [2024-12-06T13:46:30.616Z] =================================================================================================================== 00:07:31.212 [2024-12-06T13:46:30.616Z] Total : 8636.39 33.74 0.00 0.00 14816.73 4885.41 104857.60 00:07:31.212 { 00:07:31.212 "results": [ 00:07:31.212 { 00:07:31.212 "job": "Nvme0n1", 00:07:31.212 "core_mask": "0x2", 00:07:31.212 "workload": "randwrite", 00:07:31.212 "status": "finished", 00:07:31.212 "queue_depth": 128, 00:07:31.212 "io_size": 4096, 00:07:31.212 "runtime": 
10.002672, 00:07:31.212 "iops": 8636.392355962487, 00:07:31.212 "mibps": 33.735907640478466, 00:07:31.212 "io_failed": 0, 00:07:31.212 "io_timeout": 0, 00:07:31.212 "avg_latency_us": 14816.732846966665, 00:07:31.212 "min_latency_us": 4885.410909090909, 00:07:31.212 "max_latency_us": 104857.6 00:07:31.212 } 00:07:31.212 ], 00:07:31.212 "core_count": 1 00:07:31.212 } 00:07:31.212 13:46:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63266 00:07:31.212 13:46:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 63266 ']' 00:07:31.212 13:46:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 63266 00:07:31.212 13:46:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:31.212 13:46:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:31.212 13:46:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63266 00:07:31.212 killing process with pid 63266 00:07:31.212 Received shutdown signal, test time was about 10.000000 seconds 00:07:31.212 00:07:31.212 Latency(us) 00:07:31.212 [2024-12-06T13:46:30.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:31.212 [2024-12-06T13:46:30.616Z] =================================================================================================================== 00:07:31.212 [2024-12-06T13:46:30.616Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:31.212 13:46:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:31.213 13:46:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:31.213 13:46:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63266' 00:07:31.213 13:46:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 63266 00:07:31.213 13:46:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 63266 00:07:31.213 13:46:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:31.472 13:46:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:31.730 13:46:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6876abda-85cf-47e7-bf67-a4f9c0f5d970 00:07:31.730 13:46:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:32.298 13:46:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:32.298 13:46:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:32.298 13:46:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:32.298 [2024-12-06 13:46:31.608569] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:32.298 13:46:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6876abda-85cf-47e7-bf67-a4f9c0f5d970 00:07:32.298 13:46:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:32.298 13:46:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6876abda-85cf-47e7-bf67-a4f9c0f5d970 00:07:32.298 13:46:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:32.298 13:46:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.298 13:46:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:32.298 13:46:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.298 13:46:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:32.298 13:46:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.298 13:46:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:32.298 13:46:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:32.298 13:46:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6876abda-85cf-47e7-bf67-a4f9c0f5d970 00:07:32.557 request: 00:07:32.557 { 00:07:32.557 "uuid": "6876abda-85cf-47e7-bf67-a4f9c0f5d970", 00:07:32.557 "method": "bdev_lvol_get_lvstores", 00:07:32.557 "req_id": 1 00:07:32.557 } 00:07:32.557 Got JSON-RPC error response 00:07:32.557 response: 00:07:32.557 { 00:07:32.557 "code": -19, 00:07:32.557 "message": "No such device" 00:07:32.557 } 00:07:32.557 13:46:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:32.557 13:46:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:32.557 13:46:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:32.557 13:46:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:32.557 13:46:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:32.816 aio_bdev 00:07:32.816 13:46:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
9909235b-f51e-44ff-b756-bf0fa0de6303 00:07:32.816 13:46:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=9909235b-f51e-44ff-b756-bf0fa0de6303 00:07:32.816 13:46:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:32.816 13:46:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:32.816 13:46:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:32.816 13:46:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:32.816 13:46:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:33.076 13:46:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9909235b-f51e-44ff-b756-bf0fa0de6303 -t 2000 00:07:33.335 [ 00:07:33.335 { 00:07:33.335 "name": "9909235b-f51e-44ff-b756-bf0fa0de6303", 00:07:33.335 "aliases": [ 00:07:33.335 "lvs/lvol" 00:07:33.335 ], 00:07:33.335 "product_name": "Logical Volume", 00:07:33.335 "block_size": 4096, 00:07:33.335 "num_blocks": 38912, 00:07:33.335 "uuid": "9909235b-f51e-44ff-b756-bf0fa0de6303", 00:07:33.335 "assigned_rate_limits": { 00:07:33.335 "rw_ios_per_sec": 0, 00:07:33.335 "rw_mbytes_per_sec": 0, 00:07:33.335 "r_mbytes_per_sec": 0, 00:07:33.335 "w_mbytes_per_sec": 0 00:07:33.335 }, 00:07:33.335 "claimed": false, 00:07:33.336 "zoned": false, 00:07:33.336 "supported_io_types": { 00:07:33.336 "read": true, 00:07:33.336 "write": true, 00:07:33.336 "unmap": true, 00:07:33.336 "flush": false, 00:07:33.336 "reset": true, 00:07:33.336 "nvme_admin": false, 00:07:33.336 "nvme_io": false, 00:07:33.336 "nvme_io_md": false, 00:07:33.336 "write_zeroes": true, 00:07:33.336 "zcopy": false, 00:07:33.336 "get_zone_info": false, 00:07:33.336 "zone_management": false, 00:07:33.336 "zone_append": false, 00:07:33.336 "compare": false, 00:07:33.336 "compare_and_write": false, 00:07:33.336 "abort": false, 00:07:33.336 "seek_hole": true, 00:07:33.336 "seek_data": true, 00:07:33.336 "copy": false, 00:07:33.336 "nvme_iov_md": false 00:07:33.336 }, 00:07:33.336 "driver_specific": { 00:07:33.336 "lvol": { 00:07:33.336 "lvol_store_uuid": "6876abda-85cf-47e7-bf67-a4f9c0f5d970", 00:07:33.336 "base_bdev": "aio_bdev", 00:07:33.336 "thin_provision": false, 00:07:33.336 "num_allocated_clusters": 38, 00:07:33.336 "snapshot": false, 00:07:33.336 "clone": false, 00:07:33.336 "esnap_clone": false 00:07:33.336 } 00:07:33.336 } 00:07:33.336 } 00:07:33.336 ] 00:07:33.336 13:46:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:33.336 13:46:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6876abda-85cf-47e7-bf67-a4f9c0f5d970 00:07:33.336 13:46:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:33.595 13:46:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:33.595 13:46:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r 
'.[0].total_data_clusters' 00:07:33.595 13:46:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6876abda-85cf-47e7-bf67-a4f9c0f5d970 00:07:33.853 13:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:33.853 13:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 9909235b-f51e-44ff-b756-bf0fa0de6303 00:07:34.420 13:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6876abda-85cf-47e7-bf67-a4f9c0f5d970 00:07:34.420 13:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:34.987 13:46:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:35.246 ************************************ 00:07:35.246 END TEST lvs_grow_clean 00:07:35.246 ************************************ 00:07:35.246 00:07:35.246 real 0m17.622s 00:07:35.246 user 0m16.430s 00:07:35.246 sys 0m2.483s 00:07:35.246 13:46:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.246 13:46:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:35.246 13:46:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:35.246 13:46:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:35.246 13:46:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.246 13:46:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:35.246 ************************************ 00:07:35.246 START TEST lvs_grow_dirty 00:07:35.246 ************************************ 00:07:35.246 13:46:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:35.246 13:46:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:35.246 13:46:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:35.246 13:46:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:35.246 13:46:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:35.246 13:46:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:35.246 13:46:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:35.246 13:46:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:35.246 13:46:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:35.246 13:46:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:35.813 13:46:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:35.813 13:46:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:35.813 13:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=96306a0d-646f-4a0c-9de2-7e2b4c5785bc 00:07:36.070 13:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 96306a0d-646f-4a0c-9de2-7e2b4c5785bc 00:07:36.070 13:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:36.070 13:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:36.070 13:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:36.070 13:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 96306a0d-646f-4a0c-9de2-7e2b4c5785bc lvol 150 00:07:36.327 13:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f7c347f9-f721-4f27-87c4-8828f1566340 00:07:36.327 13:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:36.327 13:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:36.585 [2024-12-06 13:46:35.902862] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:36.586 [2024-12-06 13:46:35.902926] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:36.586 true 00:07:36.586 13:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 96306a0d-646f-4a0c-9de2-7e2b4c5785bc 00:07:36.586 13:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:36.844 13:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:36.844 13:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:37.103 13:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f7c347f9-f721-4f27-87c4-8828f1566340 00:07:37.361 13:46:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:37.621 [2024-12-06 13:46:36.995406] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:37.621 13:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:38.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:38.189 13:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63537 00:07:38.189 13:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:38.189 13:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:38.189 13:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63537 /var/tmp/bdevperf.sock 00:07:38.189 13:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63537 ']' 00:07:38.189 13:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:38.189 13:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.189 13:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:38.189 13:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.189 13:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:38.189 [2024-12-06 13:46:37.352685] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:07:38.189 [2024-12-06 13:46:37.353538] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63537 ] 00:07:38.189 [2024-12-06 13:46:37.490717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.189 [2024-12-06 13:46:37.543295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.448 [2024-12-06 13:46:37.618013] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:39.017 13:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.017 13:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:39.017 13:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:39.276 Nvme0n1 00:07:39.276 13:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:39.536 [ 00:07:39.536 { 00:07:39.536 "name": "Nvme0n1", 00:07:39.536 "aliases": [ 00:07:39.536 "f7c347f9-f721-4f27-87c4-8828f1566340" 00:07:39.536 ], 00:07:39.536 "product_name": "NVMe disk", 00:07:39.536 "block_size": 4096, 00:07:39.536 "num_blocks": 38912, 00:07:39.536 "uuid": "f7c347f9-f721-4f27-87c4-8828f1566340", 00:07:39.536 "numa_id": -1, 00:07:39.536 "assigned_rate_limits": { 00:07:39.536 "rw_ios_per_sec": 0, 00:07:39.536 "rw_mbytes_per_sec": 0, 00:07:39.536 "r_mbytes_per_sec": 0, 00:07:39.536 "w_mbytes_per_sec": 0 00:07:39.536 }, 00:07:39.536 "claimed": false, 00:07:39.536 "zoned": false, 00:07:39.536 "supported_io_types": { 00:07:39.536 "read": true, 00:07:39.536 "write": true, 00:07:39.536 "unmap": true, 00:07:39.536 "flush": true, 00:07:39.536 "reset": true, 00:07:39.536 "nvme_admin": true, 00:07:39.536 "nvme_io": true, 00:07:39.536 "nvme_io_md": false, 00:07:39.536 "write_zeroes": true, 00:07:39.536 "zcopy": false, 00:07:39.536 "get_zone_info": false, 00:07:39.536 "zone_management": false, 00:07:39.536 "zone_append": false, 00:07:39.536 "compare": true, 00:07:39.536 "compare_and_write": true, 00:07:39.536 "abort": true, 00:07:39.536 "seek_hole": false, 00:07:39.536 "seek_data": false, 00:07:39.536 "copy": true, 00:07:39.536 "nvme_iov_md": false 00:07:39.536 }, 00:07:39.536 "memory_domains": [ 00:07:39.536 { 00:07:39.536 "dma_device_id": "system", 00:07:39.536 "dma_device_type": 1 00:07:39.536 } 00:07:39.536 ], 00:07:39.536 "driver_specific": { 00:07:39.536 "nvme": [ 00:07:39.536 { 00:07:39.536 "trid": { 00:07:39.536 "trtype": "TCP", 00:07:39.536 "adrfam": "IPv4", 00:07:39.536 "traddr": "10.0.0.3", 00:07:39.536 "trsvcid": "4420", 00:07:39.536 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:39.536 }, 00:07:39.536 "ctrlr_data": { 00:07:39.536 "cntlid": 1, 00:07:39.536 "vendor_id": "0x8086", 00:07:39.536 "model_number": "SPDK bdev Controller", 00:07:39.536 "serial_number": "SPDK0", 00:07:39.536 "firmware_revision": "25.01", 00:07:39.536 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:39.536 "oacs": { 00:07:39.536 "security": 0, 00:07:39.536 "format": 0, 00:07:39.536 "firmware": 0, 
00:07:39.536 "ns_manage": 0 00:07:39.536 }, 00:07:39.536 "multi_ctrlr": true, 00:07:39.536 "ana_reporting": false 00:07:39.536 }, 00:07:39.536 "vs": { 00:07:39.536 "nvme_version": "1.3" 00:07:39.536 }, 00:07:39.536 "ns_data": { 00:07:39.536 "id": 1, 00:07:39.536 "can_share": true 00:07:39.536 } 00:07:39.536 } 00:07:39.536 ], 00:07:39.536 "mp_policy": "active_passive" 00:07:39.536 } 00:07:39.536 } 00:07:39.536 ] 00:07:39.536 13:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63555 00:07:39.536 13:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:39.536 13:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:39.796 Running I/O for 10 seconds... 00:07:40.759 Latency(us) 00:07:40.759 [2024-12-06T13:46:40.163Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:40.759 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.759 Nvme0n1 : 1.00 9398.00 36.71 0.00 0.00 0.00 0.00 0.00 00:07:40.759 [2024-12-06T13:46:40.163Z] =================================================================================================================== 00:07:40.759 [2024-12-06T13:46:40.163Z] Total : 9398.00 36.71 0.00 0.00 0.00 0.00 0.00 00:07:40.759 00:07:41.695 13:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 96306a0d-646f-4a0c-9de2-7e2b4c5785bc 00:07:41.695 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.695 Nvme0n1 : 2.00 9334.50 36.46 0.00 0.00 0.00 0.00 0.00 00:07:41.695 [2024-12-06T13:46:41.099Z] =================================================================================================================== 00:07:41.695 [2024-12-06T13:46:41.099Z] Total : 9334.50 36.46 0.00 0.00 0.00 0.00 0.00 00:07:41.695 00:07:41.954 true 00:07:41.954 13:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 96306a0d-646f-4a0c-9de2-7e2b4c5785bc 00:07:41.954 13:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:42.219 13:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:42.219 13:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:42.219 13:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63555 00:07:42.855 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.855 Nvme0n1 : 3.00 9186.33 35.88 0.00 0.00 0.00 0.00 0.00 00:07:42.855 [2024-12-06T13:46:42.259Z] =================================================================================================================== 00:07:42.855 [2024-12-06T13:46:42.259Z] Total : 9186.33 35.88 0.00 0.00 0.00 0.00 0.00 00:07:42.855 00:07:43.791 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.791 Nvme0n1 : 4.00 9144.00 35.72 0.00 0.00 0.00 0.00 0.00 00:07:43.791 [2024-12-06T13:46:43.195Z] 
=================================================================================================================== 00:07:43.791 [2024-12-06T13:46:43.195Z] Total : 9144.00 35.72 0.00 0.00 0.00 0.00 0.00 00:07:43.791 00:07:44.729 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:44.729 Nvme0n1 : 5.00 9118.60 35.62 0.00 0.00 0.00 0.00 0.00 00:07:44.729 [2024-12-06T13:46:44.133Z] =================================================================================================================== 00:07:44.729 [2024-12-06T13:46:44.133Z] Total : 9118.60 35.62 0.00 0.00 0.00 0.00 0.00 00:07:44.729 00:07:45.666 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:45.666 Nvme0n1 : 6.00 9059.33 35.39 0.00 0.00 0.00 0.00 0.00 00:07:45.666 [2024-12-06T13:46:45.070Z] =================================================================================================================== 00:07:45.666 [2024-12-06T13:46:45.070Z] Total : 9059.33 35.39 0.00 0.00 0.00 0.00 0.00 00:07:45.666 00:07:47.043 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:47.043 Nvme0n1 : 7.00 8705.57 34.01 0.00 0.00 0.00 0.00 0.00 00:07:47.043 [2024-12-06T13:46:46.447Z] =================================================================================================================== 00:07:47.043 [2024-12-06T13:46:46.447Z] Total : 8705.57 34.01 0.00 0.00 0.00 0.00 0.00 00:07:47.043 00:07:47.981 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:47.981 Nvme0n1 : 8.00 8653.62 33.80 0.00 0.00 0.00 0.00 0.00 00:07:47.981 [2024-12-06T13:46:47.385Z] =================================================================================================================== 00:07:47.982 [2024-12-06T13:46:47.386Z] Total : 8653.62 33.80 0.00 0.00 0.00 0.00 0.00 00:07:47.982 00:07:48.920 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:48.920 Nvme0n1 : 9.00 8623.44 33.69 0.00 0.00 0.00 0.00 0.00 00:07:48.920 [2024-12-06T13:46:48.324Z] =================================================================================================================== 00:07:48.920 [2024-12-06T13:46:48.324Z] Total : 8623.44 33.69 0.00 0.00 0.00 0.00 0.00 00:07:48.920 00:07:49.858 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:49.858 Nvme0n1 : 10.00 8612.00 33.64 0.00 0.00 0.00 0.00 0.00 00:07:49.858 [2024-12-06T13:46:49.262Z] =================================================================================================================== 00:07:49.858 [2024-12-06T13:46:49.262Z] Total : 8612.00 33.64 0.00 0.00 0.00 0.00 0.00 00:07:49.858 00:07:49.858 00:07:49.858 Latency(us) 00:07:49.858 [2024-12-06T13:46:49.262Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:49.858 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:49.858 Nvme0n1 : 10.01 8616.04 33.66 0.00 0.00 14852.61 5093.93 268816.76 00:07:49.858 [2024-12-06T13:46:49.262Z] =================================================================================================================== 00:07:49.858 [2024-12-06T13:46:49.262Z] Total : 8616.04 33.66 0.00 0.00 14852.61 5093.93 268816.76 00:07:49.858 { 00:07:49.858 "results": [ 00:07:49.858 { 00:07:49.858 "job": "Nvme0n1", 00:07:49.858 "core_mask": "0x2", 00:07:49.858 "workload": "randwrite", 00:07:49.858 "status": "finished", 00:07:49.858 "queue_depth": 128, 00:07:49.858 "io_size": 4096, 00:07:49.858 "runtime": 
10.010164, 00:07:49.858 "iops": 8616.042654246225, 00:07:49.858 "mibps": 33.656416618149315, 00:07:49.858 "io_failed": 0, 00:07:49.858 "io_timeout": 0, 00:07:49.858 "avg_latency_us": 14852.612321677027, 00:07:49.858 "min_latency_us": 5093.9345454545455, 00:07:49.858 "max_latency_us": 268816.75636363635 00:07:49.858 } 00:07:49.858 ], 00:07:49.858 "core_count": 1 00:07:49.858 } 00:07:49.858 13:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63537 00:07:49.858 13:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 63537 ']' 00:07:49.858 13:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 63537 00:07:49.858 13:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:49.858 13:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:49.858 13:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63537 00:07:49.858 killing process with pid 63537 00:07:49.858 Received shutdown signal, test time was about 10.000000 seconds 00:07:49.858 00:07:49.858 Latency(us) 00:07:49.858 [2024-12-06T13:46:49.262Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:49.858 [2024-12-06T13:46:49.262Z] =================================================================================================================== 00:07:49.858 [2024-12-06T13:46:49.262Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:49.858 13:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:49.858 13:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:49.858 13:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63537' 00:07:49.859 13:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 63537 00:07:49.859 13:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 63537 00:07:50.118 13:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:50.377 13:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:50.636 13:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 96306a0d-646f-4a0c-9de2-7e2b4c5785bc 00:07:50.636 13:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:50.894 13:46:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:50.895 13:46:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:50.895 13:46:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 63184 
00:07:50.895 13:46:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 63184 00:07:50.895 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 63184 Killed "${NVMF_APP[@]}" "$@" 00:07:50.895 13:46:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:50.895 13:46:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:50.895 13:46:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:50.895 13:46:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:50.895 13:46:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:50.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.895 13:46:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=63693 00:07:50.895 13:46:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:50.895 13:46:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 63693 00:07:50.895 13:46:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63693 ']' 00:07:50.895 13:46:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.895 13:46:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.895 13:46:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.895 13:46:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.895 13:46:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:50.895 [2024-12-06 13:46:50.241607] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:07:50.895 [2024-12-06 13:46:50.242040] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.153 [2024-12-06 13:46:50.388569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.153 [2024-12-06 13:46:50.441640] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:51.153 [2024-12-06 13:46:50.441987] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:51.153 [2024-12-06 13:46:50.442129] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:51.153 [2024-12-06 13:46:50.442185] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:51.153 [2024-12-06 13:46:50.442286] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:51.153 [2024-12-06 13:46:50.442730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.153 [2024-12-06 13:46:50.511434] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:52.090 13:46:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:52.090 13:46:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:52.090 13:46:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:52.090 13:46:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:52.090 13:46:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:52.090 13:46:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:52.090 13:46:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:52.090 [2024-12-06 13:46:51.488868] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:52.090 [2024-12-06 13:46:51.489536] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:52.090 [2024-12-06 13:46:51.489719] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:52.349 13:46:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:52.349 13:46:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f7c347f9-f721-4f27-87c4-8828f1566340 00:07:52.349 13:46:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=f7c347f9-f721-4f27-87c4-8828f1566340 00:07:52.349 13:46:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:52.349 13:46:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:52.349 13:46:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:52.349 13:46:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:52.349 13:46:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:52.607 13:46:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f7c347f9-f721-4f27-87c4-8828f1566340 -t 2000 00:07:52.867 [ 00:07:52.867 { 00:07:52.867 "name": "f7c347f9-f721-4f27-87c4-8828f1566340", 00:07:52.867 "aliases": [ 00:07:52.867 "lvs/lvol" 00:07:52.867 ], 00:07:52.867 "product_name": "Logical Volume", 00:07:52.867 "block_size": 4096, 00:07:52.867 "num_blocks": 38912, 00:07:52.867 "uuid": "f7c347f9-f721-4f27-87c4-8828f1566340", 00:07:52.867 "assigned_rate_limits": { 00:07:52.867 "rw_ios_per_sec": 0, 00:07:52.867 "rw_mbytes_per_sec": 0, 00:07:52.867 "r_mbytes_per_sec": 0, 00:07:52.867 "w_mbytes_per_sec": 0 00:07:52.867 }, 00:07:52.867 
"claimed": false, 00:07:52.867 "zoned": false, 00:07:52.867 "supported_io_types": { 00:07:52.867 "read": true, 00:07:52.867 "write": true, 00:07:52.867 "unmap": true, 00:07:52.867 "flush": false, 00:07:52.867 "reset": true, 00:07:52.867 "nvme_admin": false, 00:07:52.867 "nvme_io": false, 00:07:52.867 "nvme_io_md": false, 00:07:52.867 "write_zeroes": true, 00:07:52.867 "zcopy": false, 00:07:52.867 "get_zone_info": false, 00:07:52.867 "zone_management": false, 00:07:52.867 "zone_append": false, 00:07:52.867 "compare": false, 00:07:52.867 "compare_and_write": false, 00:07:52.867 "abort": false, 00:07:52.867 "seek_hole": true, 00:07:52.867 "seek_data": true, 00:07:52.867 "copy": false, 00:07:52.867 "nvme_iov_md": false 00:07:52.867 }, 00:07:52.867 "driver_specific": { 00:07:52.867 "lvol": { 00:07:52.867 "lvol_store_uuid": "96306a0d-646f-4a0c-9de2-7e2b4c5785bc", 00:07:52.867 "base_bdev": "aio_bdev", 00:07:52.867 "thin_provision": false, 00:07:52.867 "num_allocated_clusters": 38, 00:07:52.867 "snapshot": false, 00:07:52.867 "clone": false, 00:07:52.867 "esnap_clone": false 00:07:52.867 } 00:07:52.867 } 00:07:52.867 } 00:07:52.867 ] 00:07:52.867 13:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:52.867 13:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 96306a0d-646f-4a0c-9de2-7e2b4c5785bc 00:07:52.867 13:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:53.126 13:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:53.126 13:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 96306a0d-646f-4a0c-9de2-7e2b4c5785bc 00:07:53.126 13:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:53.385 13:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:53.385 13:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:53.645 [2024-12-06 13:46:52.798733] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:53.645 13:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 96306a0d-646f-4a0c-9de2-7e2b4c5785bc 00:07:53.645 13:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:53.645 13:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 96306a0d-646f-4a0c-9de2-7e2b4c5785bc 00:07:53.645 13:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:53.645 13:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:53.645 13:46:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:53.645 13:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:53.645 13:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:53.645 13:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:53.645 13:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:53.645 13:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:53.645 13:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 96306a0d-646f-4a0c-9de2-7e2b4c5785bc 00:07:53.904 request: 00:07:53.904 { 00:07:53.904 "uuid": "96306a0d-646f-4a0c-9de2-7e2b4c5785bc", 00:07:53.904 "method": "bdev_lvol_get_lvstores", 00:07:53.904 "req_id": 1 00:07:53.904 } 00:07:53.904 Got JSON-RPC error response 00:07:53.904 response: 00:07:53.904 { 00:07:53.904 "code": -19, 00:07:53.904 "message": "No such device" 00:07:53.904 } 00:07:53.904 13:46:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:53.904 13:46:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:53.904 13:46:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:53.904 13:46:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:53.904 13:46:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:54.162 aio_bdev 00:07:54.162 13:46:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f7c347f9-f721-4f27-87c4-8828f1566340 00:07:54.162 13:46:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=f7c347f9-f721-4f27-87c4-8828f1566340 00:07:54.162 13:46:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:54.162 13:46:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:54.162 13:46:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:54.162 13:46:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:54.163 13:46:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:54.421 13:46:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f7c347f9-f721-4f27-87c4-8828f1566340 -t 2000 00:07:54.680 [ 00:07:54.680 { 
00:07:54.680 "name": "f7c347f9-f721-4f27-87c4-8828f1566340", 00:07:54.680 "aliases": [ 00:07:54.680 "lvs/lvol" 00:07:54.680 ], 00:07:54.680 "product_name": "Logical Volume", 00:07:54.680 "block_size": 4096, 00:07:54.680 "num_blocks": 38912, 00:07:54.680 "uuid": "f7c347f9-f721-4f27-87c4-8828f1566340", 00:07:54.680 "assigned_rate_limits": { 00:07:54.680 "rw_ios_per_sec": 0, 00:07:54.680 "rw_mbytes_per_sec": 0, 00:07:54.680 "r_mbytes_per_sec": 0, 00:07:54.680 "w_mbytes_per_sec": 0 00:07:54.680 }, 00:07:54.680 "claimed": false, 00:07:54.680 "zoned": false, 00:07:54.680 "supported_io_types": { 00:07:54.680 "read": true, 00:07:54.680 "write": true, 00:07:54.680 "unmap": true, 00:07:54.680 "flush": false, 00:07:54.680 "reset": true, 00:07:54.680 "nvme_admin": false, 00:07:54.680 "nvme_io": false, 00:07:54.680 "nvme_io_md": false, 00:07:54.680 "write_zeroes": true, 00:07:54.680 "zcopy": false, 00:07:54.680 "get_zone_info": false, 00:07:54.680 "zone_management": false, 00:07:54.680 "zone_append": false, 00:07:54.680 "compare": false, 00:07:54.680 "compare_and_write": false, 00:07:54.680 "abort": false, 00:07:54.680 "seek_hole": true, 00:07:54.680 "seek_data": true, 00:07:54.680 "copy": false, 00:07:54.680 "nvme_iov_md": false 00:07:54.680 }, 00:07:54.680 "driver_specific": { 00:07:54.680 "lvol": { 00:07:54.680 "lvol_store_uuid": "96306a0d-646f-4a0c-9de2-7e2b4c5785bc", 00:07:54.680 "base_bdev": "aio_bdev", 00:07:54.680 "thin_provision": false, 00:07:54.680 "num_allocated_clusters": 38, 00:07:54.680 "snapshot": false, 00:07:54.680 "clone": false, 00:07:54.680 "esnap_clone": false 00:07:54.680 } 00:07:54.680 } 00:07:54.680 } 00:07:54.680 ] 00:07:54.680 13:46:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:54.680 13:46:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:54.680 13:46:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 96306a0d-646f-4a0c-9de2-7e2b4c5785bc 00:07:54.939 13:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:54.939 13:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 96306a0d-646f-4a0c-9de2-7e2b4c5785bc 00:07:54.939 13:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:55.198 13:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:55.198 13:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete f7c347f9-f721-4f27-87c4-8828f1566340 00:07:55.456 13:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 96306a0d-646f-4a0c-9de2-7e2b4c5785bc 00:07:55.714 13:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:55.714 13:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:56.354 ************************************ 00:07:56.354 END TEST lvs_grow_dirty 00:07:56.354 ************************************ 00:07:56.354 00:07:56.354 real 0m20.923s 00:07:56.354 user 0m42.924s 00:07:56.354 sys 0m8.147s 00:07:56.354 13:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.354 13:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:56.354 13:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:56.354 13:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:56.354 13:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:56.354 13:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:56.354 13:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:56.354 13:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:56.354 13:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:56.354 13:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:56.354 13:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:56.354 nvmf_trace.0 00:07:56.354 13:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:56.354 13:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:56.354 13:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:56.354 13:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:56.676 13:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:56.676 13:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:56.676 13:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:56.676 13:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:56.676 rmmod nvme_tcp 00:07:56.676 rmmod nvme_fabrics 00:07:56.676 rmmod nvme_keyring 00:07:56.676 13:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:56.676 13:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:56.676 13:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:56.676 13:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 63693 ']' 00:07:56.676 13:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 63693 00:07:56.676 13:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 63693 ']' 00:07:56.677 13:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 63693 00:07:56.677 13:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:56.677 13:46:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.677 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63693 00:07:56.677 killing process with pid 63693 00:07:56.677 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:56.677 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:56.677 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63693' 00:07:56.677 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 63693 00:07:56.677 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 63693 00:07:56.935 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:56.935 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:56.935 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:56.935 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:56.935 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:56.935 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:56.935 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:56.935 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:56.935 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:56.935 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:56.935 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:56.935 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:56.935 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:57.193 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:57.193 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:57.193 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:57.193 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:57.193 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:57.193 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:57.193 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:57.193 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:57.193 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:57.193 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:07:57.193 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.193 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:57.193 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.193 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:07:57.193 ************************************ 00:07:57.193 END TEST nvmf_lvs_grow 00:07:57.193 ************************************ 00:07:57.193 00:07:57.193 real 0m41.532s 00:07:57.193 user 1m6.068s 00:07:57.193 sys 0m11.736s 00:07:57.193 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.193 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:57.193 13:46:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:57.193 13:46:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:57.193 13:46:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.193 13:46:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:57.193 ************************************ 00:07:57.193 START TEST nvmf_bdev_io_wait 00:07:57.193 ************************************ 00:07:57.193 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:57.451 * Looking for test storage... 
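For reference, the lvs_grow_dirty case that just finished above exercises hot-removal and recovery of a logical-volume store. Stripped of the harness wrappers, its RPC sequence is roughly the sketch below; rpc.py is scripts/rpc.py from the SPDK repo, and the UUIDs and aio_bdev file path are the ones from this particular run, not fixed values.

    # Hot-remove the AIO base bdev underneath the live lvstore; the lvstore is closed with it.
    rpc.py bdev_aio_delete aio_bdev
    # The lvstore is now gone, so querying it is expected to fail with -19 "No such device".
    rpc.py bdev_lvol_get_lvstores -u 96306a0d-646f-4a0c-9de2-7e2b4c5785bc
    # Re-create the AIO bdev over the same backing file; blobstore recovery replays the
    # dirty metadata and the lvol reappears under the same UUID.
    rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
    rpc.py bdev_wait_for_examine
    rpc.py bdev_get_bdevs -b f7c347f9-f721-4f27-87c4-8828f1566340 -t 2000
    # Cluster accounting must match the pre-failure state (61 free of 99 data clusters here).
    rpc.py bdev_lvol_get_lvstores -u 96306a0d-646f-4a0c-9de2-7e2b4c5785bc | jq -r '.[0].free_clusters'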
00:07:57.451 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:57.451 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:57.451 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:07:57.451 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:57.451 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:57.451 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:57.451 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:57.451 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:57.451 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:57.451 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:57.451 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:57.451 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:57.451 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:57.451 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:57.451 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:57.451 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:57.451 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:57.451 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:57.451 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:57.451 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:57.451 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:57.451 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:57.451 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:57.451 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:57.451 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:57.452 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:57.452 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:57.452 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:57.452 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:57.452 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:57.452 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:57.452 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:57.452 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:57.452 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:57.452 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:57.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.452 --rc genhtml_branch_coverage=1 00:07:57.452 --rc genhtml_function_coverage=1 00:07:57.452 --rc genhtml_legend=1 00:07:57.452 --rc geninfo_all_blocks=1 00:07:57.452 --rc geninfo_unexecuted_blocks=1 00:07:57.452 00:07:57.452 ' 00:07:57.452 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:57.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.452 --rc genhtml_branch_coverage=1 00:07:57.452 --rc genhtml_function_coverage=1 00:07:57.452 --rc genhtml_legend=1 00:07:57.452 --rc geninfo_all_blocks=1 00:07:57.452 --rc geninfo_unexecuted_blocks=1 00:07:57.452 00:07:57.452 ' 00:07:57.452 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:57.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.452 --rc genhtml_branch_coverage=1 00:07:57.452 --rc genhtml_function_coverage=1 00:07:57.452 --rc genhtml_legend=1 00:07:57.452 --rc geninfo_all_blocks=1 00:07:57.452 --rc geninfo_unexecuted_blocks=1 00:07:57.452 00:07:57.452 ' 00:07:57.452 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:57.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.452 --rc genhtml_branch_coverage=1 00:07:57.452 --rc genhtml_function_coverage=1 00:07:57.452 --rc genhtml_legend=1 00:07:57.452 --rc geninfo_all_blocks=1 00:07:57.452 --rc geninfo_unexecuted_blocks=1 00:07:57.452 00:07:57.452 ' 00:07:57.452 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:57.452 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:07:57.452 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:57.452 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:57.452 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:57.452 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:57.452 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:57.452 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:57.452 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:57.452 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:57.452 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:57.452 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:57.452 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:07:57.452 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=cfa2def7-c8af-457f-82a0-b312efdea7f4 00:07:57.452 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:57.711 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
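The cmp_versions/lt helper traced above decides ordering by splitting each version string on '.', '-' and ':' and comparing the pieces numerically from left to right. A condensed stand-alone sketch of the same idea follows; ver_lt is a hypothetical name, and the in-tree helper in scripts/common.sh handles more edge cases than this.

    # Returns 0 (true) when $1 sorts strictly before $2.
    ver_lt() {
        local IFS='.-:'
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first differing field decides
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                                          # equal versions are not "less than"
    }
    ver_lt 1.15 2 && echo "1.15 < 2"                      # prints: 1.15 < 2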
00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:57.711 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:57.712 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:57.712 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:57.712 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:57.712 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:57.712 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:57.712 
13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:57.712 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:57.712 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:57.712 Cannot find device "nvmf_init_br" 00:07:57.712 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:07:57.712 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:57.712 Cannot find device "nvmf_init_br2" 00:07:57.712 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:07:57.712 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:57.712 Cannot find device "nvmf_tgt_br" 00:07:57.712 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:07:57.712 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:57.712 Cannot find device "nvmf_tgt_br2" 00:07:57.712 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:07:57.712 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:57.712 Cannot find device "nvmf_init_br" 00:07:57.712 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:07:57.712 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:57.712 Cannot find device "nvmf_init_br2" 00:07:57.712 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:07:57.712 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:57.712 Cannot find device "nvmf_tgt_br" 00:07:57.712 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:07:57.712 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:57.712 Cannot find device "nvmf_tgt_br2" 00:07:57.712 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:07:57.712 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:57.712 Cannot find device "nvmf_br" 00:07:57.712 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:07:57.712 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:57.712 Cannot find device "nvmf_init_if" 00:07:57.712 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:07:57.712 13:46:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:57.712 Cannot find device "nvmf_init_if2" 00:07:57.712 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:07:57.712 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:57.712 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:57.712 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:07:57.712 
13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:57.712 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:57.712 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:07:57.712 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:57.712 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:57.712 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:57.712 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:57.712 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:57.712 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:57.712 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:57.712 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:57.712 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:57.712 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:57.712 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:57.712 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:57.970 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:57.970 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:57.970 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:57.970 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:57.970 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:57.970 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:57.970 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:57.970 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:57.970 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:57.970 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:57.970 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:57.970 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:57.970 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:57.970 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:57.970 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:57.970 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:57.971 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:57.971 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:57.971 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:57.971 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:57.971 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:57.971 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:57.971 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:07:57.971 00:07:57.971 --- 10.0.0.3 ping statistics --- 00:07:57.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.971 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:07:57.971 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:57.971 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:57.971 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.094 ms 00:07:57.971 00:07:57.971 --- 10.0.0.4 ping statistics --- 00:07:57.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.971 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:07:57.971 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:57.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:57.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:07:57.971 00:07:57.971 --- 10.0.0.1 ping statistics --- 00:07:57.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.971 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:07:57.971 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:57.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:57.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:07:57.971 00:07:57.971 --- 10.0.0.2 ping statistics --- 00:07:57.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.971 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:07:57.971 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:57.971 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:07:57.971 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:57.971 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:57.971 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:57.971 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:57.971 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:57.971 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:57.971 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:57.971 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:57.971 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:57.971 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:57.971 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:57.971 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=64067 00:07:57.971 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:57.971 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 64067 00:07:57.971 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 64067 ']' 00:07:57.971 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.971 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:57.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.971 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.971 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:57.971 13:46:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:57.971 [2024-12-06 13:46:57.350222] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
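The nvmf_veth_init sequence above builds a small bridged topology before the target is launched: a network namespace (nvmf_tgt_ns_spdk) holding the target-side veth ends with 10.0.0.3 and 10.0.0.4, the initiator-side ends with 10.0.0.1 and 10.0.0.2 left in the root namespace, everything joined through the nvmf_br bridge, with iptables ACCEPT rules for port 4420 and the ping checks shown above. Reduced to one interface pair per side (the harness creates two), the setup is roughly:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side + its bridge port
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target side + its bridge port
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # The harness also tags each rule with an SPDK_NVMF comment so nvmftestfini can strip it later.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3                                             # reachability check, as above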
00:07:57.971 [2024-12-06 13:46:57.350323] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:58.230 [2024-12-06 13:46:57.495338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:58.230 [2024-12-06 13:46:57.548047] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:58.230 [2024-12-06 13:46:57.548108] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:58.230 [2024-12-06 13:46:57.548119] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:58.230 [2024-12-06 13:46:57.548126] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:58.230 [2024-12-06 13:46:57.548132] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:58.230 [2024-12-06 13:46:57.549353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.230 [2024-12-06 13:46:57.549471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:58.230 [2024-12-06 13:46:57.549599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.230 [2024-12-06 13:46:57.549601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:59.167 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:59.167 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:59.167 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:59.167 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:59.167 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:59.167 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:59.167 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:59.167 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.167 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:59.167 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.167 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:59.167 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.167 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:59.167 [2024-12-06 13:46:58.446790] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.167 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.167 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:59.167 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.167 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:59.167 [2024-12-06 13:46:58.464084] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:59.167 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.167 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:59.167 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.167 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:59.167 Malloc0 00:07:59.167 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.167 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:59.167 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.167 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:59.167 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.167 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:59.167 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.167 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:59.167 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.167 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:07:59.167 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.167 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:59.167 [2024-12-06 13:46:58.521337] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:59.167 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.167 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=64112 00:07:59.167 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=64114 00:07:59.167 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:59.167 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:59.167 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:59.167 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:59.167 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:59.167 13:46:58 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=64116 00:07:59.167 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:59.167 { 00:07:59.167 "params": { 00:07:59.167 "name": "Nvme$subsystem", 00:07:59.167 "trtype": "$TEST_TRANSPORT", 00:07:59.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:59.167 "adrfam": "ipv4", 00:07:59.167 "trsvcid": "$NVMF_PORT", 00:07:59.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:59.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:59.167 "hdgst": ${hdgst:-false}, 00:07:59.167 "ddgst": ${ddgst:-false} 00:07:59.167 }, 00:07:59.167 "method": "bdev_nvme_attach_controller" 00:07:59.168 } 00:07:59.168 EOF 00:07:59.168 )") 00:07:59.168 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:59.168 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:59.168 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=64117 00:07:59.168 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:59.168 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:59.168 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:59.168 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:59.168 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:59.168 { 00:07:59.168 "params": { 00:07:59.168 "name": "Nvme$subsystem", 00:07:59.168 "trtype": "$TEST_TRANSPORT", 00:07:59.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:59.168 "adrfam": "ipv4", 00:07:59.168 "trsvcid": "$NVMF_PORT", 00:07:59.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:59.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:59.168 "hdgst": ${hdgst:-false}, 00:07:59.168 "ddgst": ${ddgst:-false} 00:07:59.168 }, 00:07:59.168 "method": "bdev_nvme_attach_controller" 00:07:59.168 } 00:07:59.168 EOF 00:07:59.168 )") 00:07:59.168 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:59.168 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:59.168 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:59.168 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:59.168 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:59.168 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:59.168 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:59.168 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:59.168 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:59.168 
13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:59.168 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:59.168 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:59.168 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:59.168 { 00:07:59.168 "params": { 00:07:59.168 "name": "Nvme$subsystem", 00:07:59.168 "trtype": "$TEST_TRANSPORT", 00:07:59.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:59.168 "adrfam": "ipv4", 00:07:59.168 "trsvcid": "$NVMF_PORT", 00:07:59.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:59.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:59.168 "hdgst": ${hdgst:-false}, 00:07:59.168 "ddgst": ${ddgst:-false} 00:07:59.168 }, 00:07:59.168 "method": "bdev_nvme_attach_controller" 00:07:59.168 } 00:07:59.168 EOF 00:07:59.168 )") 00:07:59.168 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:59.168 { 00:07:59.168 "params": { 00:07:59.168 "name": "Nvme$subsystem", 00:07:59.168 "trtype": "$TEST_TRANSPORT", 00:07:59.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:59.168 "adrfam": "ipv4", 00:07:59.168 "trsvcid": "$NVMF_PORT", 00:07:59.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:59.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:59.168 "hdgst": ${hdgst:-false}, 00:07:59.168 "ddgst": ${ddgst:-false} 00:07:59.168 }, 00:07:59.168 "method": "bdev_nvme_attach_controller" 00:07:59.168 } 00:07:59.168 EOF 00:07:59.168 )") 00:07:59.168 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:59.168 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:59.168 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:59.168 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:59.168 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:59.168 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:59.168 "params": { 00:07:59.168 "name": "Nvme1", 00:07:59.168 "trtype": "tcp", 00:07:59.168 "traddr": "10.0.0.3", 00:07:59.168 "adrfam": "ipv4", 00:07:59.168 "trsvcid": "4420", 00:07:59.168 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:59.168 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:59.168 "hdgst": false, 00:07:59.168 "ddgst": false 00:07:59.168 }, 00:07:59.168 "method": "bdev_nvme_attach_controller" 00:07:59.168 }' 00:07:59.168 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:59.168 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:59.168 "params": { 00:07:59.168 "name": "Nvme1", 00:07:59.168 "trtype": "tcp", 00:07:59.168 "traddr": "10.0.0.3", 00:07:59.168 "adrfam": "ipv4", 00:07:59.168 "trsvcid": "4420", 00:07:59.168 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:59.168 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:59.168 "hdgst": false, 00:07:59.168 "ddgst": false 00:07:59.168 }, 00:07:59.168 "method": "bdev_nvme_attach_controller" 00:07:59.168 }' 00:07:59.168 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
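For orientation amid the generated client configs, the target-side setup traced further above reduces to the RPC sequence below; rpc_cmd is the harness wrapper around scripts/rpc.py talking to the nvmf_tgt that was started with --wait-for-rpc, and the deliberately tiny bdev_io pool (-p 5 -c 1) is what exercises the io_wait retry path this test is named for.

    rpc_cmd bdev_set_options -p 5 -c 1                 # tiny bdev_io pool/cache to force IO_WAIT retries
    rpc_cmd framework_start_init                       # finish the startup deferred by --wait-for-rpc
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM bdev, 512-byte blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420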
00:07:59.168 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:59.168 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:59.168 "params": { 00:07:59.168 "name": "Nvme1", 00:07:59.168 "trtype": "tcp", 00:07:59.168 "traddr": "10.0.0.3", 00:07:59.168 "adrfam": "ipv4", 00:07:59.168 "trsvcid": "4420", 00:07:59.168 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:59.168 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:59.168 "hdgst": false, 00:07:59.168 "ddgst": false 00:07:59.168 }, 00:07:59.168 "method": "bdev_nvme_attach_controller" 00:07:59.168 }' 00:07:59.168 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:59.427 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:59.427 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:59.427 "params": { 00:07:59.427 "name": "Nvme1", 00:07:59.427 "trtype": "tcp", 00:07:59.427 "traddr": "10.0.0.3", 00:07:59.427 "adrfam": "ipv4", 00:07:59.427 "trsvcid": "4420", 00:07:59.427 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:59.427 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:59.428 "hdgst": false, 00:07:59.428 "ddgst": false 00:07:59.428 }, 00:07:59.428 "method": "bdev_nvme_attach_controller" 00:07:59.428 }' 00:07:59.428 [2024-12-06 13:46:58.594462] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:07:59.428 [2024-12-06 13:46:58.595329] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:59.428 [2024-12-06 13:46:58.601562] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:07:59.428 [2024-12-06 13:46:58.601648] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:59.428 13:46:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 64112 00:07:59.428 [2024-12-06 13:46:58.626284] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:07:59.428 [2024-12-06 13:46:58.626374] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:59.428 [2024-12-06 13:46:58.639684] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:07:59.428 [2024-12-06 13:46:58.639789] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:59.686 [2024-12-06 13:46:58.831456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.686 [2024-12-06 13:46:58.898543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:59.686 [2024-12-06 13:46:58.912592] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.686 [2024-12-06 13:46:58.930870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.686 [2024-12-06 13:46:58.987591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:59.686 [2024-12-06 13:46:58.996088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.686 [2024-12-06 13:46:59.001719] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.686 [2024-12-06 13:46:59.045803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:59.686 [2024-12-06 13:46:59.058159] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.686 Running I/O for 1 seconds... 00:07:59.686 [2024-12-06 13:46:59.076567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.945 Running I/O for 1 seconds... 00:07:59.945 [2024-12-06 13:46:59.147690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:59.945 [2024-12-06 13:46:59.161678] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.945 Running I/O for 1 seconds... 00:07:59.945 Running I/O for 1 seconds... 
00:08:00.882 6207.00 IOPS, 24.25 MiB/s 00:08:00.882 Latency(us) 00:08:00.882 [2024-12-06T13:47:00.286Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:00.882 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:00.882 Nvme1n1 : 1.03 6227.24 24.33 0.00 0.00 20459.07 8221.79 41943.04 00:08:00.882 [2024-12-06T13:47:00.286Z] =================================================================================================================== 00:08:00.882 [2024-12-06T13:47:00.286Z] Total : 6227.24 24.33 0.00 0.00 20459.07 8221.79 41943.04 00:08:00.882 5244.00 IOPS, 20.48 MiB/s 00:08:00.882 Latency(us) 00:08:00.882 [2024-12-06T13:47:00.286Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:00.882 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:00.882 Nvme1n1 : 1.02 5266.17 20.57 0.00 0.00 24061.34 11498.59 34078.72 00:08:00.882 [2024-12-06T13:47:00.286Z] =================================================================================================================== 00:08:00.882 [2024-12-06T13:47:00.286Z] Total : 5266.17 20.57 0.00 0.00 24061.34 11498.59 34078.72 00:08:00.882 185632.00 IOPS, 725.12 MiB/s 00:08:00.882 Latency(us) 00:08:00.882 [2024-12-06T13:47:00.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:00.883 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:00.883 Nvme1n1 : 1.00 185309.04 723.86 0.00 0.00 687.23 318.37 1712.87 00:08:00.883 [2024-12-06T13:47:00.287Z] =================================================================================================================== 00:08:00.883 [2024-12-06T13:47:00.287Z] Total : 185309.04 723.86 0.00 0.00 687.23 318.37 1712.87 00:08:01.143 6662.00 IOPS, 26.02 MiB/s 00:08:01.143 Latency(us) 00:08:01.143 [2024-12-06T13:47:00.547Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:01.143 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:01.143 Nvme1n1 : 1.01 6775.79 26.47 0.00 0.00 18829.45 5362.04 48377.48 00:08:01.143 [2024-12-06T13:47:00.547Z] =================================================================================================================== 00:08:01.143 [2024-12-06T13:47:00.547Z] Total : 6775.79 26.47 0.00 0.00 18829.45 5362.04 48377.48 00:08:01.143 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 64114 00:08:01.143 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 64116 00:08:01.143 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 64117 00:08:01.143 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:01.143 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.143 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:01.143 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.143 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:01.143 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:01.143 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:08:01.143 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:01.402 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:01.402 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:01.402 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:01.402 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:01.402 rmmod nvme_tcp 00:08:01.402 rmmod nvme_fabrics 00:08:01.402 rmmod nvme_keyring 00:08:01.402 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:01.402 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:01.402 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:01.402 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 64067 ']' 00:08:01.402 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 64067 00:08:01.402 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 64067 ']' 00:08:01.402 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 64067 00:08:01.402 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:01.402 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:01.402 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64067 00:08:01.402 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:01.402 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:01.402 killing process with pid 64067 00:08:01.402 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64067' 00:08:01.402 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 64067 00:08:01.402 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 64067 00:08:01.662 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:01.662 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:01.662 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:01.662 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:01.662 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:01.662 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:01.662 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:01.662 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:01.662 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:01.662 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:01.662 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:01.662 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:01.662 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:01.662 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:01.662 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:01.662 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:01.662 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:01.662 13:47:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:01.662 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:01.662 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:01.662 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:01.921 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:01.921 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:01.921 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.921 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:01.921 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.921 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:08:01.921 00:08:01.921 real 0m4.532s 00:08:01.921 user 0m18.292s 00:08:01.921 sys 0m2.250s 00:08:01.921 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.921 ************************************ 00:08:01.921 END TEST nvmf_bdev_io_wait 00:08:01.921 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:01.921 ************************************ 00:08:01.921 13:47:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:01.921 13:47:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:01.921 13:47:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.921 13:47:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:01.921 ************************************ 00:08:01.921 START TEST nvmf_queue_depth 00:08:01.921 ************************************ 00:08:01.921 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:01.921 * Looking for test storage... 
00:08:01.921 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:01.921 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:01.921 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:08:01.921 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:02.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.183 --rc genhtml_branch_coverage=1 00:08:02.183 --rc genhtml_function_coverage=1 00:08:02.183 --rc genhtml_legend=1 00:08:02.183 --rc geninfo_all_blocks=1 00:08:02.183 --rc geninfo_unexecuted_blocks=1 00:08:02.183 00:08:02.183 ' 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:02.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.183 --rc genhtml_branch_coverage=1 00:08:02.183 --rc genhtml_function_coverage=1 00:08:02.183 --rc genhtml_legend=1 00:08:02.183 --rc geninfo_all_blocks=1 00:08:02.183 --rc geninfo_unexecuted_blocks=1 00:08:02.183 00:08:02.183 ' 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:02.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.183 --rc genhtml_branch_coverage=1 00:08:02.183 --rc genhtml_function_coverage=1 00:08:02.183 --rc genhtml_legend=1 00:08:02.183 --rc geninfo_all_blocks=1 00:08:02.183 --rc geninfo_unexecuted_blocks=1 00:08:02.183 00:08:02.183 ' 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:02.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.183 --rc genhtml_branch_coverage=1 00:08:02.183 --rc genhtml_function_coverage=1 00:08:02.183 --rc genhtml_legend=1 00:08:02.183 --rc geninfo_all_blocks=1 00:08:02.183 --rc geninfo_unexecuted_blocks=1 00:08:02.183 00:08:02.183 ' 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=cfa2def7-c8af-457f-82a0-b312efdea7f4 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.183 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:02.184 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:02.184 
13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:02.184 13:47:01 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:02.184 Cannot find device "nvmf_init_br" 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:02.184 Cannot find device "nvmf_init_br2" 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:02.184 Cannot find device "nvmf_tgt_br" 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:02.184 Cannot find device "nvmf_tgt_br2" 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:02.184 Cannot find device "nvmf_init_br" 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:02.184 Cannot find device "nvmf_init_br2" 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:02.184 Cannot find device "nvmf_tgt_br" 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:02.184 Cannot find device "nvmf_tgt_br2" 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:02.184 Cannot find device "nvmf_br" 00:08:02.184 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:08:02.185 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:02.185 Cannot find device "nvmf_init_if" 00:08:02.185 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:08:02.185 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:02.185 Cannot find device "nvmf_init_if2" 00:08:02.185 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:08:02.185 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:02.185 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:02.185 13:47:01 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:08:02.185 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:02.185 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:02.185 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:08:02.185 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:02.185 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:02.185 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:02.185 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:02.185 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:02.445 
13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:02.445 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:02.445 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:08:02.445 00:08:02.445 --- 10.0.0.3 ping statistics --- 00:08:02.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.445 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:02.445 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:02.445 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:08:02.445 00:08:02.445 --- 10.0.0.4 ping statistics --- 00:08:02.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.445 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:02.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:02.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:08:02.445 00:08:02.445 --- 10.0.0.1 ping statistics --- 00:08:02.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.445 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:02.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:02.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:08:02.445 00:08:02.445 --- 10.0.0.2 ping statistics --- 00:08:02.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.445 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=64403 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 64403 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64403 ']' 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:02.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:02.445 13:47:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.705 [2024-12-06 13:47:01.906345] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:08:02.705 [2024-12-06 13:47:01.906440] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.705 [2024-12-06 13:47:02.063592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.964 [2024-12-06 13:47:02.123477] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:02.964 [2024-12-06 13:47:02.123539] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:02.964 [2024-12-06 13:47:02.123553] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:02.964 [2024-12-06 13:47:02.123564] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:02.964 [2024-12-06 13:47:02.123573] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:02.964 [2024-12-06 13:47:02.124049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.964 [2024-12-06 13:47:02.198513] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:02.964 13:47:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.964 13:47:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:02.964 13:47:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:02.964 13:47:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:02.964 13:47:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.964 13:47:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:02.964 13:47:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:02.964 13:47:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.964 13:47:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.964 [2024-12-06 13:47:02.328459] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:02.964 13:47:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.965 13:47:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:02.965 13:47:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.965 13:47:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.965 Malloc0 00:08:02.965 13:47:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.965 13:47:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:02.965 13:47:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.965 13:47:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:08:03.225 13:47:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.225 13:47:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:03.225 13:47:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.225 13:47:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:03.225 13:47:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.225 13:47:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:03.225 13:47:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.225 13:47:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:03.225 [2024-12-06 13:47:02.386771] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:03.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:03.225 13:47:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.225 13:47:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64422 00:08:03.225 13:47:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:03.225 13:47:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64422 /var/tmp/bdevperf.sock 00:08:03.225 13:47:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:03.225 13:47:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64422 ']' 00:08:03.225 13:47:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:03.225 13:47:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:03.225 13:47:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:03.225 13:47:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:03.225 13:47:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:03.225 [2024-12-06 13:47:02.450823] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:08:03.225 [2024-12-06 13:47:02.451199] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64422 ] 00:08:03.225 [2024-12-06 13:47:02.604719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.484 [2024-12-06 13:47:02.675717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.484 [2024-12-06 13:47:02.751263] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.484 13:47:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:03.484 13:47:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:03.484 13:47:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:03.484 13:47:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.484 13:47:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:03.744 NVMe0n1 00:08:03.744 13:47:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.744 13:47:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:03.744 Running I/O for 10 seconds... 00:08:06.059 8192.00 IOPS, 32.00 MiB/s [2024-12-06T13:47:06.399Z] 8725.50 IOPS, 34.08 MiB/s [2024-12-06T13:47:07.336Z] 9101.67 IOPS, 35.55 MiB/s [2024-12-06T13:47:08.273Z] 9349.50 IOPS, 36.52 MiB/s [2024-12-06T13:47:09.210Z] 9525.20 IOPS, 37.21 MiB/s [2024-12-06T13:47:10.146Z] 9632.50 IOPS, 37.63 MiB/s [2024-12-06T13:47:11.109Z] 9720.86 IOPS, 37.97 MiB/s [2024-12-06T13:47:12.047Z] 9758.50 IOPS, 38.12 MiB/s [2024-12-06T13:47:13.429Z] 9807.11 IOPS, 38.31 MiB/s [2024-12-06T13:47:13.429Z] 9850.40 IOPS, 38.48 MiB/s 00:08:14.025 Latency(us) 00:08:14.025 [2024-12-06T13:47:13.429Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.025 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:14.025 Verification LBA range: start 0x0 length 0x4000 00:08:14.025 NVMe0n1 : 10.07 9879.88 38.59 0.00 0.00 103245.13 22758.87 79119.83 00:08:14.025 [2024-12-06T13:47:13.429Z] =================================================================================================================== 00:08:14.025 [2024-12-06T13:47:13.429Z] Total : 9879.88 38.59 0.00 0.00 103245.13 22758.87 79119.83 00:08:14.025 { 00:08:14.025 "results": [ 00:08:14.025 { 00:08:14.025 "job": "NVMe0n1", 00:08:14.025 "core_mask": "0x1", 00:08:14.025 "workload": "verify", 00:08:14.025 "status": "finished", 00:08:14.025 "verify_range": { 00:08:14.025 "start": 0, 00:08:14.025 "length": 16384 00:08:14.025 }, 00:08:14.025 "queue_depth": 1024, 00:08:14.025 "io_size": 4096, 00:08:14.025 "runtime": 10.070669, 00:08:14.025 "iops": 9879.87987689795, 00:08:14.025 "mibps": 38.59328076913262, 00:08:14.025 "io_failed": 0, 00:08:14.025 "io_timeout": 0, 00:08:14.025 "avg_latency_us": 103245.12687121678, 00:08:14.025 "min_latency_us": 22758.865454545456, 00:08:14.025 "max_latency_us": 79119.82545454545 00:08:14.025 
} 00:08:14.025 ], 00:08:14.025 "core_count": 1 00:08:14.025 } 00:08:14.025 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64422 00:08:14.025 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64422 ']' 00:08:14.025 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64422 00:08:14.025 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:14.025 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:14.025 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64422 00:08:14.025 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:14.025 killing process with pid 64422 00:08:14.025 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:14.025 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64422' 00:08:14.025 Received shutdown signal, test time was about 10.000000 seconds 00:08:14.025 00:08:14.025 Latency(us) 00:08:14.025 [2024-12-06T13:47:13.429Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.025 [2024-12-06T13:47:13.429Z] =================================================================================================================== 00:08:14.025 [2024-12-06T13:47:13.429Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:14.025 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64422 00:08:14.025 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64422 00:08:14.025 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:14.025 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:14.025 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:14.025 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:14.285 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:14.285 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:14.285 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:14.285 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:14.285 rmmod nvme_tcp 00:08:14.285 rmmod nvme_fabrics 00:08:14.285 rmmod nvme_keyring 00:08:14.285 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:14.285 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:14.285 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:14.285 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 64403 ']' 00:08:14.285 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 64403 00:08:14.285 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64403 ']' 00:08:14.285 
13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64403 00:08:14.285 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:14.285 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:14.285 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64403 00:08:14.285 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:14.285 killing process with pid 64403 00:08:14.285 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:14.285 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64403' 00:08:14.285 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64403 00:08:14.285 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64403 00:08:14.545 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:14.545 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:14.545 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:14.545 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:14.545 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:14.545 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:14.545 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:14.545 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:14.545 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:14.545 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:14.545 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:14.545 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:14.545 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:14.545 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:14.545 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:14.545 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:14.545 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:14.545 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:14.804 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:14.804 13:47:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:14.804 13:47:14 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:14.804 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:14.804 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:14.804 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.804 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:14.804 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.804 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:08:14.804 00:08:14.804 real 0m12.904s 00:08:14.804 user 0m21.763s 00:08:14.804 sys 0m2.229s 00:08:14.804 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.804 ************************************ 00:08:14.804 END TEST nvmf_queue_depth 00:08:14.804 ************************************ 00:08:14.804 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:14.804 13:47:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:14.804 13:47:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:14.804 13:47:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.804 13:47:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:14.804 ************************************ 00:08:14.804 START TEST nvmf_target_multipath 00:08:14.804 ************************************ 00:08:14.804 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:15.065 * Looking for test storage... 
00:08:15.065 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:15.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.065 --rc genhtml_branch_coverage=1 00:08:15.065 --rc genhtml_function_coverage=1 00:08:15.065 --rc genhtml_legend=1 00:08:15.065 --rc geninfo_all_blocks=1 00:08:15.065 --rc geninfo_unexecuted_blocks=1 00:08:15.065 00:08:15.065 ' 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:15.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.065 --rc genhtml_branch_coverage=1 00:08:15.065 --rc genhtml_function_coverage=1 00:08:15.065 --rc genhtml_legend=1 00:08:15.065 --rc geninfo_all_blocks=1 00:08:15.065 --rc geninfo_unexecuted_blocks=1 00:08:15.065 00:08:15.065 ' 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:15.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.065 --rc genhtml_branch_coverage=1 00:08:15.065 --rc genhtml_function_coverage=1 00:08:15.065 --rc genhtml_legend=1 00:08:15.065 --rc geninfo_all_blocks=1 00:08:15.065 --rc geninfo_unexecuted_blocks=1 00:08:15.065 00:08:15.065 ' 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:15.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.065 --rc genhtml_branch_coverage=1 00:08:15.065 --rc genhtml_function_coverage=1 00:08:15.065 --rc genhtml_legend=1 00:08:15.065 --rc geninfo_all_blocks=1 00:08:15.065 --rc geninfo_unexecuted_blocks=1 00:08:15.065 00:08:15.065 ' 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=cfa2def7-c8af-457f-82a0-b312efdea7f4 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.065 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.065 
13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:15.066 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:15.066 13:47:14 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:15.066 Cannot find device "nvmf_init_br" 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:15.066 Cannot find device "nvmf_init_br2" 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:15.066 Cannot find device "nvmf_tgt_br" 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:15.066 Cannot find device "nvmf_tgt_br2" 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:15.066 Cannot find device "nvmf_init_br" 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:15.066 Cannot find device "nvmf_init_br2" 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:15.066 Cannot find device "nvmf_tgt_br" 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:15.066 Cannot find device "nvmf_tgt_br2" 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:15.066 Cannot find device "nvmf_br" 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:08:15.066 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:15.327 Cannot find device "nvmf_init_if" 00:08:15.327 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:08:15.327 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:15.327 Cannot find device "nvmf_init_if2" 00:08:15.327 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:08:15.327 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:15.327 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:15.327 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:08:15.327 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:15.327 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:15.327 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:08:15.327 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:15.327 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:15.327 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:15.327 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:15.327 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:15.327 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:15.327 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:15.327 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:15.327 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:15.327 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:15.327 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:15.327 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:15.327 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:15.327 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:15.327 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:15.327 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:15.327 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:15.327 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
00:08:15.327 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:15.327 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:15.327 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:15.327 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:15.327 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:15.327 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:15.327 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:15.327 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:15.327 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:15.327 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:15.588 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:15.588 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:15.588 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:15.588 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:15.588 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:15.588 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:15.588 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:08:15.588 00:08:15.588 --- 10.0.0.3 ping statistics --- 00:08:15.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.588 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:08:15.588 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:15.588 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:15.588 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:08:15.588 00:08:15.588 --- 10.0.0.4 ping statistics --- 00:08:15.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.588 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:08:15.588 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:15.588 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:15.588 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:08:15.588 00:08:15.588 --- 10.0.0.1 ping statistics --- 00:08:15.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.588 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:08:15.588 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:15.588 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:15.588 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:08:15.588 00:08:15.588 --- 10.0.0.2 ping statistics --- 00:08:15.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.588 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:08:15.588 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:15.588 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:08:15.588 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:15.588 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:15.588 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:15.588 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:15.588 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:15.588 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:15.588 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:15.588 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:08:15.588 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:08:15.588 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:08:15.588 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:15.588 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:15.588 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:15.588 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:15.588 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=64791 00:08:15.588 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 64791 00:08:15.588 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 64791 ']' 00:08:15.588 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.588 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:15.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:15.588 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.588 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:15.588 13:47:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:15.588 [2024-12-06 13:47:14.847764] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:08:15.588 [2024-12-06 13:47:14.847846] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.857 [2024-12-06 13:47:15.001576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:15.857 [2024-12-06 13:47:15.064553] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.857 [2024-12-06 13:47:15.064631] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:15.857 [2024-12-06 13:47:15.064646] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:15.857 [2024-12-06 13:47:15.064657] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:15.857 [2024-12-06 13:47:15.064666] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:15.857 [2024-12-06 13:47:15.066129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.857 [2024-12-06 13:47:15.066228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:15.857 [2024-12-06 13:47:15.066355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:15.857 [2024-12-06 13:47:15.066361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.857 [2024-12-06 13:47:15.141047] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:15.857 13:47:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:15.857 13:47:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:08:15.857 13:47:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:15.857 13:47:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:15.857 13:47:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:16.119 13:47:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.119 13:47:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:16.378 [2024-12-06 13:47:15.548696] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:16.378 13:47:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:08:16.637 Malloc0 00:08:16.637 13:47:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:08:16.637 13:47:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:16.895 13:47:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:17.155 [2024-12-06 13:47:16.427600] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:17.155 13:47:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:08:17.414 [2024-12-06 13:47:16.707846] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:08:17.414 13:47:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid=cfa2def7-c8af-457f-82a0-b312efdea7f4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:08:17.673 13:47:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid=cfa2def7-c8af-457f-82a0-b312efdea7f4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:08:17.673 13:47:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:08:17.673 13:47:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:08:17.673 13:47:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:17.673 13:47:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:17.673 13:47:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:08:19.627 13:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:19.627 13:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:19.627 13:47:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:19.627 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:19.627 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:19.627 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:08:19.627 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:08:19.627 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:08:19.628 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:08:19.628 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:19.628 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:08:19.628 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:08:19.628 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:08:19.628 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:08:19.628 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:08:19.628 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:08:19.628 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:08:19.628 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:08:19.628 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:08:19.628 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:08:19.628 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:19.628 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:19.628 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:19.628 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:19.628 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:19.628 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:08:19.628 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:19.628 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:19.628 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:19.628 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:19.628 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:19.628 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:08:19.628 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=64879 00:08:19.628 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:08:19.628 13:47:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:19.886 [global] 00:08:19.886 thread=1 00:08:19.886 invalidate=1 00:08:19.886 rw=randrw 00:08:19.886 time_based=1 00:08:19.886 runtime=6 00:08:19.886 ioengine=libaio 00:08:19.886 direct=1 00:08:19.886 bs=4096 00:08:19.886 iodepth=128 00:08:19.886 norandommap=0 00:08:19.886 numjobs=1 00:08:19.886 00:08:19.886 verify_dump=1 00:08:19.886 verify_backlog=512 00:08:19.886 verify_state_save=0 00:08:19.886 do_verify=1 00:08:19.886 verify=crc32c-intel 00:08:19.886 [job0] 00:08:19.886 filename=/dev/nvme0n1 00:08:19.886 Could not set queue depth (nvme0n1) 00:08:19.886 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:19.886 fio-3.35 00:08:19.886 Starting 1 thread 00:08:20.820 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:21.077 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:08:21.334 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:08:21.334 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:21.334 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:21.334 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:21.334 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:21.334 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:21.334 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:08:21.334 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:21.334 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:21.334 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:21.334 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:21.334 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:21.334 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:21.592 13:47:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:08:21.851 13:47:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:08:21.851 13:47:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:21.851 13:47:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:21.851 13:47:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:21.851 13:47:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:21.851 13:47:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:21.851 13:47:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:08:21.851 13:47:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:21.851 13:47:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:21.851 13:47:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:21.851 13:47:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:21.851 13:47:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:21.851 13:47:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 64879 00:08:26.043 00:08:26.043 job0: (groupid=0, jobs=1): err= 0: pid=64900: Fri Dec 6 13:47:25 2024 00:08:26.043 read: IOPS=10.9k, BW=42.7MiB/s (44.8MB/s)(257MiB/6006msec) 00:08:26.043 slat (usec): min=3, max=7896, avg=53.62, stdev=216.01 00:08:26.043 clat (usec): min=1611, max=15982, avg=7918.41, stdev=1484.48 00:08:26.043 lat (usec): min=1621, max=15994, avg=7972.03, stdev=1489.42 00:08:26.043 clat percentiles (usec): 00:08:26.043 | 1.00th=[ 4080], 5.00th=[ 5866], 10.00th=[ 6587], 20.00th=[ 7046], 00:08:26.043 | 30.00th=[ 7308], 40.00th=[ 7504], 50.00th=[ 7701], 60.00th=[ 7963], 00:08:26.043 | 70.00th=[ 8225], 80.00th=[ 8717], 90.00th=[ 9765], 95.00th=[11076], 00:08:26.043 | 99.00th=[12518], 99.50th=[13042], 99.90th=[13829], 99.95th=[14615], 00:08:26.043 | 99.99th=[15926] 00:08:26.043 bw ( KiB/s): min= 8872, max=28680, per=53.30%, avg=23316.91, stdev=6078.84, samples=11 00:08:26.043 iops : min= 2218, max= 7170, avg=5829.18, stdev=1519.73, samples=11 00:08:26.043 write: IOPS=6454, BW=25.2MiB/s (26.4MB/s)(136MiB/5412msec); 0 zone resets 00:08:26.043 slat (usec): min=15, max=3316, avg=61.53, stdev=149.68 00:08:26.043 clat (usec): min=1331, max=15818, avg=6916.58, stdev=1331.87 00:08:26.043 lat (usec): min=1418, max=15840, avg=6978.11, stdev=1337.61 00:08:26.043 clat percentiles (usec): 00:08:26.043 | 1.00th=[ 3195], 5.00th=[ 4080], 10.00th=[ 5276], 20.00th=[ 6259], 00:08:26.043 | 30.00th=[ 6521], 40.00th=[ 6783], 50.00th=[ 6980], 60.00th=[ 7177], 00:08:26.043 | 70.00th=[ 7439], 80.00th=[ 7767], 90.00th=[ 8356], 95.00th=[ 8848], 00:08:26.043 | 99.00th=[10552], 99.50th=[11338], 99.90th=[12387], 99.95th=[12518], 00:08:26.043 | 99.99th=[13435] 00:08:26.043 bw ( KiB/s): min= 9144, max=28560, per=90.35%, avg=23328.64, stdev=5829.55, samples=11 00:08:26.043 iops : min= 2286, max= 7140, avg=5831.91, stdev=1457.30, samples=11 00:08:26.043 lat (msec) : 2=0.03%, 4=2.08%, 10=91.62%, 20=6.27% 00:08:26.043 cpu : usr=5.28%, sys=22.61%, ctx=5807, majf=0, minf=90 00:08:26.043 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:26.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:26.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:26.043 issued rwts: total=65689,34932,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:26.043 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:26.043 00:08:26.043 Run status group 0 (all jobs): 00:08:26.043 READ: bw=42.7MiB/s (44.8MB/s), 42.7MiB/s-42.7MiB/s (44.8MB/s-44.8MB/s), io=257MiB (269MB), run=6006-6006msec 00:08:26.043 WRITE: bw=25.2MiB/s (26.4MB/s), 25.2MiB/s-25.2MiB/s (26.4MB/s-26.4MB/s), io=136MiB (143MB), run=5412-5412msec 00:08:26.043 00:08:26.043 Disk stats (read/write): 00:08:26.043 nvme0n1: ios=64716/34292, merge=0/0, ticks=489737/222792, in_queue=712529, util=98.62% 00:08:26.043 13:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:08:26.301 13:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:08:26.562 13:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:08:26.562 13:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:26.562 13:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:26.562 13:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:26.562 13:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:26.562 13:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:26.562 13:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:08:26.562 13:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:26.562 13:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:26.562 13:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:26.562 13:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:26.562 13:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:26.562 13:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:08:26.562 13:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=64976 00:08:26.562 13:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:26.562 13:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:08:26.562 [global] 00:08:26.562 thread=1 00:08:26.562 invalidate=1 00:08:26.562 rw=randrw 00:08:26.562 time_based=1 00:08:26.562 runtime=6 00:08:26.562 ioengine=libaio 00:08:26.562 direct=1 00:08:26.562 bs=4096 00:08:26.562 iodepth=128 00:08:26.562 norandommap=0 00:08:26.562 numjobs=1 00:08:26.562 00:08:26.562 verify_dump=1 00:08:26.562 verify_backlog=512 00:08:26.562 verify_state_save=0 00:08:26.562 do_verify=1 00:08:26.562 verify=crc32c-intel 00:08:26.841 [job0] 00:08:26.841 filename=/dev/nvme0n1 00:08:26.841 Could not set queue depth (nvme0n1) 00:08:26.841 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:26.841 fio-3.35 00:08:26.841 Starting 1 thread 00:08:27.787 13:47:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:28.045 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:08:28.304 
13:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:08:28.304 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:28.304 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:28.304 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:28.304 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:28.304 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:28.304 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:08:28.304 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:28.304 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:28.304 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:28.304 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:28.304 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:28.304 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:28.563 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:08:28.822 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:08:28.822 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:28.822 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:28.822 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:28.822 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:08:28.822 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:28.822 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:08:28.822 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:28.822 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:28.823 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:28.823 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:28.823 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:28.823 13:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 64976 00:08:33.017 00:08:33.017 job0: (groupid=0, jobs=1): err= 0: pid=64997: Fri Dec 6 13:47:32 2024 00:08:33.017 read: IOPS=11.3k, BW=44.2MiB/s (46.4MB/s)(266MiB/6003msec) 00:08:33.017 slat (usec): min=7, max=5944, avg=44.70, stdev=187.03 00:08:33.017 clat (usec): min=270, max=25162, avg=7713.65, stdev=2725.08 00:08:33.017 lat (usec): min=282, max=25172, avg=7758.36, stdev=2727.21 00:08:33.017 clat percentiles (usec): 00:08:33.017 | 1.00th=[ 1074], 5.00th=[ 2737], 10.00th=[ 4883], 20.00th=[ 6652], 00:08:33.017 | 30.00th=[ 6980], 40.00th=[ 7242], 50.00th=[ 7439], 60.00th=[ 7701], 00:08:33.017 | 70.00th=[ 8029], 80.00th=[ 8586], 90.00th=[10945], 95.00th=[13042], 00:08:33.017 | 99.00th=[16581], 99.50th=[18220], 99.90th=[21627], 99.95th=[22938], 00:08:33.017 | 99.99th=[23987] 00:08:33.017 bw ( KiB/s): min= 3448, max=29768, per=51.56%, avg=23354.18, stdev=8742.84, samples=11 00:08:33.017 iops : min= 862, max= 7442, avg=5838.55, stdev=2185.71, samples=11 00:08:33.017 write: IOPS=6949, BW=27.1MiB/s (28.5MB/s)(138MiB/5083msec); 0 zone resets 00:08:33.017 slat (usec): min=15, max=1663, avg=53.59, stdev=126.10 00:08:33.017 clat (usec): min=224, max=19435, avg=6732.66, stdev=2403.87 00:08:33.017 lat (usec): min=252, max=19463, avg=6786.24, stdev=2406.53 00:08:33.017 clat percentiles (usec): 00:08:33.017 | 1.00th=[ 963], 5.00th=[ 1975], 10.00th=[ 3195], 20.00th=[ 5866], 00:08:33.017 | 30.00th=[ 6325], 40.00th=[ 6587], 50.00th=[ 6849], 60.00th=[ 7046], 00:08:33.017 | 70.00th=[ 7308], 80.00th=[ 7832], 90.00th=[ 9372], 95.00th=[11076], 00:08:33.017 | 99.00th=[13304], 99.50th=[13960], 99.90th=[17433], 99.95th=[18220], 00:08:33.017 | 99.99th=[19268] 00:08:33.017 bw ( KiB/s): min= 3752, max=29624, per=84.18%, avg=23401.45, stdev=8546.35, samples=11 00:08:33.017 iops : min= 938, max= 7406, avg=5850.36, stdev=2136.59, samples=11 00:08:33.017 lat (usec) : 250=0.01%, 500=0.10%, 750=0.32%, 1000=0.51% 00:08:33.017 lat (msec) : 2=2.75%, 4=6.13%, 10=78.73%, 20=11.34%, 50=0.12% 00:08:33.017 cpu : usr=5.76%, sys=23.16%, ctx=6563, majf=0, minf=145 00:08:33.017 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:33.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:33.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:33.017 issued rwts: total=67975,35326,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:33.017 latency 
: target=0, window=0, percentile=100.00%, depth=128 00:08:33.017 00:08:33.017 Run status group 0 (all jobs): 00:08:33.017 READ: bw=44.2MiB/s (46.4MB/s), 44.2MiB/s-44.2MiB/s (46.4MB/s-46.4MB/s), io=266MiB (278MB), run=6003-6003msec 00:08:33.017 WRITE: bw=27.1MiB/s (28.5MB/s), 27.1MiB/s-27.1MiB/s (28.5MB/s-28.5MB/s), io=138MiB (145MB), run=5083-5083msec 00:08:33.017 00:08:33.017 Disk stats (read/write): 00:08:33.017 nvme0n1: ios=66810/34814, merge=0/0, ticks=494746/221070, in_queue=715816, util=98.66% 00:08:33.017 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:33.017 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:33.017 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:33.017 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:08:33.017 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:33.017 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:33.017 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:33.017 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:33.017 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:08:33.017 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:33.276 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:08:33.276 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:08:33.276 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:08:33.276 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:08:33.276 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:33.276 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:33.535 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:33.535 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:33.535 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:33.535 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:33.535 rmmod nvme_tcp 00:08:33.535 rmmod nvme_fabrics 00:08:33.535 rmmod nvme_keyring 00:08:33.535 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:33.535 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:33.535 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:33.535 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- 
# '[' -n 64791 ']' 00:08:33.535 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 64791 00:08:33.535 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 64791 ']' 00:08:33.535 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 64791 00:08:33.535 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:08:33.535 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:33.535 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64791 00:08:33.535 killing process with pid 64791 00:08:33.536 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:33.536 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:33.536 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64791' 00:08:33.536 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 64791 00:08:33.536 13:47:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 64791 00:08:33.795 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:33.795 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:33.795 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:33.795 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:33.795 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:33.795 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:33.795 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:33.795 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:33.795 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:33.795 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:33.795 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:33.795 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:33.795 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:33.795 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:33.795 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:33.795 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:33.795 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:33.795 
13:47:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:34.055 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:34.055 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:34.055 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:34.055 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:34.055 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:34.055 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.055 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:34.055 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.055 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:08:34.055 ************************************ 00:08:34.055 END TEST nvmf_target_multipath 00:08:34.055 ************************************ 00:08:34.055 00:08:34.055 real 0m19.227s 00:08:34.055 user 1m11.008s 00:08:34.055 sys 0m9.357s 00:08:34.055 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.055 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:34.055 13:47:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:34.055 13:47:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:34.055 13:47:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.055 13:47:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:34.055 ************************************ 00:08:34.055 START TEST nvmf_zcopy 00:08:34.055 ************************************ 00:08:34.055 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:34.315 * Looking for test storage... 
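The multipath test that wraps up above never touches the TCP connections directly: failover is driven entirely by flipping each listener's ANA state over RPC while fio keeps a verify workload running, and by polling the per-controller ana_state file until the host reflects the change. A condensed sketch of that flow, reusing the rpc.py path, subsystem NQN and listener addresses from the trace (the wait loop below is a simplified stand-in for the harness's check_ana_state helper, not its exact code):

#!/usr/bin/env bash
# Flip ANA state on the two TCP listeners and wait until the host sees it.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

set_ana() {   # set_ana <listener-ip> <ana-state>
    "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a "$1" -s 4420 -n "$2"
}

wait_ana() {  # wait_ana <controller-block-dev> <expected sysfs value>
    local f=/sys/block/$1/ana_state timeout=20
    while (( timeout-- > 0 )); do
        [[ -e $f && $(<"$f") == "$2" ]] && return 0
        sleep 1
    done
    return 1
}

# Path on 10.0.0.3 goes inaccessible, 10.0.0.4 stays usable as non-optimized,
# mirroring the sequence traced above; note the RPC spells it non_optimized
# while the kernel reports non-optimized in sysfs.
set_ana 10.0.0.3 inaccessible
set_ana 10.0.0.4 non_optimized
wait_ana nvme0c0n1 inaccessible
wait_ana nvme0c1n1 non-optimized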
00:08:34.315 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:34.315 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:34.315 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:08:34.315 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:34.315 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:34.315 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:34.315 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:34.315 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:34.315 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:34.315 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:34.315 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:34.315 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:34.315 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:34.315 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:34.315 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:34.315 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:34.315 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:34.315 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:34.315 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:34.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.316 --rc genhtml_branch_coverage=1 00:08:34.316 --rc genhtml_function_coverage=1 00:08:34.316 --rc genhtml_legend=1 00:08:34.316 --rc geninfo_all_blocks=1 00:08:34.316 --rc geninfo_unexecuted_blocks=1 00:08:34.316 00:08:34.316 ' 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:34.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.316 --rc genhtml_branch_coverage=1 00:08:34.316 --rc genhtml_function_coverage=1 00:08:34.316 --rc genhtml_legend=1 00:08:34.316 --rc geninfo_all_blocks=1 00:08:34.316 --rc geninfo_unexecuted_blocks=1 00:08:34.316 00:08:34.316 ' 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:34.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.316 --rc genhtml_branch_coverage=1 00:08:34.316 --rc genhtml_function_coverage=1 00:08:34.316 --rc genhtml_legend=1 00:08:34.316 --rc geninfo_all_blocks=1 00:08:34.316 --rc geninfo_unexecuted_blocks=1 00:08:34.316 00:08:34.316 ' 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:34.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.316 --rc genhtml_branch_coverage=1 00:08:34.316 --rc genhtml_function_coverage=1 00:08:34.316 --rc genhtml_legend=1 00:08:34.316 --rc geninfo_all_blocks=1 00:08:34.316 --rc geninfo_unexecuted_blocks=1 00:08:34.316 00:08:34.316 ' 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
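The scripts/common.sh activity traced here is the lcov version gate: both version strings are split on '.', '-' and ':' and compared field by field, so 1.15 sorts below 2 and the older lcov option set is selected. A minimal sketch of that comparison pattern (an illustration of the idiom, not the exact helper):

# Field-wise "less than" for dotted version strings, as traced from scripts/common.sh.
version_lt() {   # version_lt 1.15 2  -> returns 0 when $1 < $2
    local IFS=.-: i n
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}

version_lt 1.15 2 && echo 'lcov 1.15 predates 2'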
00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=cfa2def7-c8af-457f-82a0-b312efdea7f4 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:34.316 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:34.316 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
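nvmftestinit below (prepare_net_devs / nvmf_veth_init) builds the virtual network the whole test rides on: veth pairs whose target ends live in the nvmf_tgt_ns_spdk namespace, bridged together on the host side, with iptables rules admitting NVMe/TCP on port 4420. A condensed sketch of that topology using the same names as the trace; only one of the two initiator/target pairs is shown, and root privileges are assumed:

# One initiator/target veth pair plus the bridge, as set up by nvmf_veth_init below.
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end, 10.0.0.1
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target end,    10.0.0.3
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link add nvmf_br type bridge                              # host-side glue
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3                                           # initiator -> target reachability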
00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:34.317 Cannot find device "nvmf_init_br" 00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:08:34.317 13:47:33 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:34.317 Cannot find device "nvmf_init_br2" 00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:34.317 Cannot find device "nvmf_tgt_br" 00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:34.317 Cannot find device "nvmf_tgt_br2" 00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:34.317 Cannot find device "nvmf_init_br" 00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:08:34.317 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:34.576 Cannot find device "nvmf_init_br2" 00:08:34.576 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:08:34.576 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:34.576 Cannot find device "nvmf_tgt_br" 00:08:34.576 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:08:34.576 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:34.576 Cannot find device "nvmf_tgt_br2" 00:08:34.576 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:08:34.576 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:34.576 Cannot find device "nvmf_br" 00:08:34.576 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:08:34.576 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:34.576 Cannot find device "nvmf_init_if" 00:08:34.576 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:08:34.576 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:34.576 Cannot find device "nvmf_init_if2" 00:08:34.576 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:08:34.576 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:34.576 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:34.576 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:08:34.576 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:34.576 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:34.576 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:08:34.576 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:34.576 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:34.576 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:08:34.576 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:34.576 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:34.576 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:34.576 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:34.576 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:34.576 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:34.576 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:34.576 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:34.576 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:34.576 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:34.576 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:34.576 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:34.576 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:34.576 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:34.576 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:34.576 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:34.576 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:34.576 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:34.576 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:34.576 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:34.835 13:47:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:34.835 13:47:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:34.835 13:47:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:34.836 13:47:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:34.836 13:47:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:34.836 13:47:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:34.836 13:47:34 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:34.836 13:47:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:34.836 13:47:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:34.836 13:47:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:34.836 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:34.836 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.122 ms 00:08:34.836 00:08:34.836 --- 10.0.0.3 ping statistics --- 00:08:34.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.836 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:08:34.836 13:47:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:34.836 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:34.836 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:08:34.836 00:08:34.836 --- 10.0.0.4 ping statistics --- 00:08:34.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.836 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:08:34.836 13:47:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:34.836 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:34.836 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:08:34.836 00:08:34.836 --- 10.0.0.1 ping statistics --- 00:08:34.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.836 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:08:34.836 13:47:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:34.836 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:34.836 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:08:34.836 00:08:34.836 --- 10.0.0.2 ping statistics --- 00:08:34.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.836 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:08:34.836 13:47:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:34.836 13:47:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:08:34.836 13:47:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:34.836 13:47:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:34.836 13:47:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:34.836 13:47:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:34.836 13:47:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:34.836 13:47:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:34.836 13:47:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:34.836 13:47:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:34.836 13:47:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:34.836 13:47:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:34.836 13:47:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:34.836 13:47:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=65304 00:08:34.836 13:47:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:34.836 13:47:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 65304 00:08:34.836 13:47:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 65304 ']' 00:08:34.836 13:47:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.836 13:47:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:34.836 13:47:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.836 13:47:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:34.836 13:47:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:34.836 [2024-12-06 13:47:34.151224] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:08:34.836 [2024-12-06 13:47:34.151321] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:35.095 [2024-12-06 13:47:34.304097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.096 [2024-12-06 13:47:34.374531] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:35.096 [2024-12-06 13:47:34.374760] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:35.096 [2024-12-06 13:47:34.374916] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:35.096 [2024-12-06 13:47:34.374980] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:35.096 [2024-12-06 13:47:34.375164] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:35.096 [2024-12-06 13:47:34.375814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.096 [2024-12-06 13:47:34.447731] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:36.032 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:36.032 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:36.032 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:36.032 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:36.032 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:36.032 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:36.032 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:36.032 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:36.032 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.032 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:36.032 [2024-12-06 13:47:35.185637] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:36.032 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.032 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:36.032 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.032 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:36.032 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.032 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:36.032 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.032 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:08:36.032 [2024-12-06 13:47:35.205691] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:36.032 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.032 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:36.032 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.032 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:36.032 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.032 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:36.032 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.032 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:36.032 malloc0 00:08:36.032 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.032 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:36.032 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.032 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:36.032 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.032 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:36.032 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:36.032 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:36.032 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:36.032 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:36.032 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:36.032 { 00:08:36.032 "params": { 00:08:36.032 "name": "Nvme$subsystem", 00:08:36.032 "trtype": "$TEST_TRANSPORT", 00:08:36.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:36.032 "adrfam": "ipv4", 00:08:36.032 "trsvcid": "$NVMF_PORT", 00:08:36.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:36.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:36.032 "hdgst": ${hdgst:-false}, 00:08:36.032 "ddgst": ${ddgst:-false} 00:08:36.032 }, 00:08:36.032 "method": "bdev_nvme_attach_controller" 00:08:36.032 } 00:08:36.032 EOF 00:08:36.032 )") 00:08:36.032 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:36.032 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
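The gen_nvmf_target_json fragment traced here builds the single bdev_nvme_attach_controller entry that bdevperf reads over /dev/fd/62. Written out as a standalone file it would look roughly like the sketch below: the inner params are the values the helper prints just below, while the outer "subsystems"/"config" wrapper is the usual SPDK JSON-config layout and is assumed here rather than shown by the trace (the /tmp filename is only for the sketch):

# Equivalent standalone config plus the first bdevperf invocation from the trace
# (10 s verify workload, queue depth 128, 8 KiB I/O).
cat > /tmp/zcopy_nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/zcopy_nvme1.json -t 10 -q 128 -w verify -o 8192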
00:08:36.032 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:36.032 13:47:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:36.032 "params": { 00:08:36.032 "name": "Nvme1", 00:08:36.032 "trtype": "tcp", 00:08:36.032 "traddr": "10.0.0.3", 00:08:36.032 "adrfam": "ipv4", 00:08:36.032 "trsvcid": "4420", 00:08:36.032 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:36.032 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:36.032 "hdgst": false, 00:08:36.032 "ddgst": false 00:08:36.032 }, 00:08:36.032 "method": "bdev_nvme_attach_controller" 00:08:36.032 }' 00:08:36.032 [2024-12-06 13:47:35.307060] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:08:36.032 [2024-12-06 13:47:35.307600] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65337 ] 00:08:36.290 [2024-12-06 13:47:35.463530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.290 [2024-12-06 13:47:35.527890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.290 [2024-12-06 13:47:35.611775] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:36.549 Running I/O for 10 seconds... 00:08:38.421 6882.00 IOPS, 53.77 MiB/s [2024-12-06T13:47:38.773Z] 7015.50 IOPS, 54.81 MiB/s [2024-12-06T13:47:40.147Z] 7001.00 IOPS, 54.70 MiB/s [2024-12-06T13:47:41.080Z] 7039.50 IOPS, 55.00 MiB/s [2024-12-06T13:47:42.066Z] 7098.00 IOPS, 55.45 MiB/s [2024-12-06T13:47:43.003Z] 7129.67 IOPS, 55.70 MiB/s [2024-12-06T13:47:43.940Z] 7138.14 IOPS, 55.77 MiB/s [2024-12-06T13:47:44.878Z] 7152.62 IOPS, 55.88 MiB/s [2024-12-06T13:47:45.815Z] 7175.00 IOPS, 56.05 MiB/s [2024-12-06T13:47:45.815Z] 7181.20 IOPS, 56.10 MiB/s 00:08:46.411 Latency(us) 00:08:46.411 [2024-12-06T13:47:45.815Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:46.411 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:46.411 Verification LBA range: start 0x0 length 0x1000 00:08:46.411 Nvme1n1 : 10.01 7184.88 56.13 0.00 0.00 17761.35 351.88 27405.96 00:08:46.411 [2024-12-06T13:47:45.815Z] =================================================================================================================== 00:08:46.411 [2024-12-06T13:47:45.815Z] Total : 7184.88 56.13 0.00 0.00 17761.35 351.88 27405.96 00:08:46.671 13:47:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65460 00:08:46.671 13:47:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:46.671 13:47:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:46.671 13:47:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:46.671 13:47:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:46.671 13:47:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:46.671 13:47:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:46.671 13:47:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:46.671 13:47:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:46.671 { 00:08:46.671 "params": { 00:08:46.671 "name": "Nvme$subsystem", 00:08:46.671 "trtype": "$TEST_TRANSPORT", 00:08:46.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:46.671 "adrfam": "ipv4", 00:08:46.671 "trsvcid": "$NVMF_PORT", 00:08:46.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:46.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:46.671 "hdgst": ${hdgst:-false}, 00:08:46.671 "ddgst": ${ddgst:-false} 00:08:46.671 }, 00:08:46.671 "method": "bdev_nvme_attach_controller" 00:08:46.671 } 00:08:46.671 EOF 00:08:46.671 )") 00:08:46.671 13:47:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:46.671 [2024-12-06 13:47:46.014811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.671 [2024-12-06 13:47:46.014999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.671 13:47:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:08:46.671 13:47:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:46.671 13:47:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:46.671 "params": { 00:08:46.671 "name": "Nvme1", 00:08:46.671 "trtype": "tcp", 00:08:46.671 "traddr": "10.0.0.3", 00:08:46.671 "adrfam": "ipv4", 00:08:46.671 "trsvcid": "4420", 00:08:46.671 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:46.671 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:46.671 "hdgst": false, 00:08:46.671 "ddgst": false 00:08:46.671 }, 00:08:46.671 "method": "bdev_nvme_attach_controller" 00:08:46.671 }' 00:08:46.671 [2024-12-06 13:47:46.026743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.671 [2024-12-06 13:47:46.026775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.671 [2024-12-06 13:47:46.038739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.671 [2024-12-06 13:47:46.038783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.671 [2024-12-06 13:47:46.050755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.671 [2024-12-06 13:47:46.050784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.671 [2024-12-06 13:47:46.062770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.671 [2024-12-06 13:47:46.062798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.931 [2024-12-06 13:47:46.074549] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:08:46.931 [2024-12-06 13:47:46.074799] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.931 [2024-12-06 13:47:46.074826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.931 [2024-12-06 13:47:46.075269] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65460 ] 00:08:46.931 [2024-12-06 13:47:46.086751] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.931 [2024-12-06 13:47:46.086779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.931 [2024-12-06 13:47:46.098745] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.931 [2024-12-06 13:47:46.098770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.931 [2024-12-06 13:47:46.110746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.931 [2024-12-06 13:47:46.110771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.931 [2024-12-06 13:47:46.122753] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.931 [2024-12-06 13:47:46.122793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.931 [2024-12-06 13:47:46.134753] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.931 [2024-12-06 13:47:46.134777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.931 [2024-12-06 13:47:46.146755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.931 [2024-12-06 13:47:46.146780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.931 [2024-12-06 13:47:46.158758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.931 [2024-12-06 13:47:46.158782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.931 [2024-12-06 13:47:46.170762] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.931 [2024-12-06 13:47:46.170802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.931 [2024-12-06 13:47:46.182761] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.931 [2024-12-06 13:47:46.182786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.931 [2024-12-06 13:47:46.194784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.931 [2024-12-06 13:47:46.194812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.931 [2024-12-06 13:47:46.206779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.931 [2024-12-06 13:47:46.206954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.931 [2024-12-06 13:47:46.215407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.931 [2024-12-06 13:47:46.218775] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.931 [2024-12-06 13:47:46.218803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:08:46.931 [2024-12-06 13:47:46.230785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.931 [2024-12-06 13:47:46.230821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.931 [2024-12-06 13:47:46.242777] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.931 [2024-12-06 13:47:46.242944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.931 [2024-12-06 13:47:46.254781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.931 [2024-12-06 13:47:46.254808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.931 [2024-12-06 13:47:46.266779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.931 [2024-12-06 13:47:46.266804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.931 [2024-12-06 13:47:46.267421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.931 [2024-12-06 13:47:46.278782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.931 [2024-12-06 13:47:46.278808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.931 [2024-12-06 13:47:46.290790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.931 [2024-12-06 13:47:46.290819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.931 [2024-12-06 13:47:46.302797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.931 [2024-12-06 13:47:46.302829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.931 [2024-12-06 13:47:46.314802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.931 [2024-12-06 13:47:46.314832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.931 [2024-12-06 13:47:46.326804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.931 [2024-12-06 13:47:46.326832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.190 [2024-12-06 13:47:46.338805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.190 [2024-12-06 13:47:46.338833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.190 [2024-12-06 13:47:46.348594] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:47.191 [2024-12-06 13:47:46.350804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.191 [2024-12-06 13:47:46.350830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.191 [2024-12-06 13:47:46.362806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.191 [2024-12-06 13:47:46.362832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.191 [2024-12-06 13:47:46.374811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.191 [2024-12-06 13:47:46.374995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.191 [2024-12-06 13:47:46.386817] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:47.191 [2024-12-06 13:47:46.386845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.191 [2024-12-06 13:47:46.398809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.191 [2024-12-06 13:47:46.398836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.191 [2024-12-06 13:47:46.410830] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.191 [2024-12-06 13:47:46.410865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.191 [2024-12-06 13:47:46.422838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.191 [2024-12-06 13:47:46.422868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.191 [2024-12-06 13:47:46.434846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.191 [2024-12-06 13:47:46.434876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.191 [2024-12-06 13:47:46.446854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.191 [2024-12-06 13:47:46.447035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.191 [2024-12-06 13:47:46.458875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.191 [2024-12-06 13:47:46.458904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.191 [2024-12-06 13:47:46.470871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.191 [2024-12-06 13:47:46.470903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.191 Running I/O for 5 seconds... 
00:08:47.191 [2024-12-06 13:47:46.486557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.191 [2024-12-06 13:47:46.486590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.191 [2024-12-06 13:47:46.502726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.191 [2024-12-06 13:47:46.502759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.191 [2024-12-06 13:47:46.519213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.191 [2024-12-06 13:47:46.519244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.191 [2024-12-06 13:47:46.536866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.191 [2024-12-06 13:47:46.537030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.191 [2024-12-06 13:47:46.551276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.191 [2024-12-06 13:47:46.551308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.191 [2024-12-06 13:47:46.567597] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.191 [2024-12-06 13:47:46.567655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.191 [2024-12-06 13:47:46.583930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.191 [2024-12-06 13:47:46.584002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.451 [2024-12-06 13:47:46.601074] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.451 [2024-12-06 13:47:46.601133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.451 [2024-12-06 13:47:46.617772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.451 [2024-12-06 13:47:46.617802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.451 [2024-12-06 13:47:46.635016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.451 [2024-12-06 13:47:46.635202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.451 [2024-12-06 13:47:46.651850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.451 [2024-12-06 13:47:46.651885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.451 [2024-12-06 13:47:46.668077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.451 [2024-12-06 13:47:46.668144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.451 [2024-12-06 13:47:46.684743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.451 [2024-12-06 13:47:46.684791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.451 [2024-12-06 13:47:46.700834] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.451 [2024-12-06 13:47:46.700866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.451 [2024-12-06 13:47:46.718525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.451 
[2024-12-06 13:47:46.718695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.451 [2024-12-06 13:47:46.732234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.451 [2024-12-06 13:47:46.732267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.451 [2024-12-06 13:47:46.747124] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.451 [2024-12-06 13:47:46.747165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.451 [2024-12-06 13:47:46.762915] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.451 [2024-12-06 13:47:46.762948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.451 [2024-12-06 13:47:46.779395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.451 [2024-12-06 13:47:46.779428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.451 [2024-12-06 13:47:46.795970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.451 [2024-12-06 13:47:46.796004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.451 [2024-12-06 13:47:46.812284] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.451 [2024-12-06 13:47:46.812316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.451 [2024-12-06 13:47:46.828713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.451 [2024-12-06 13:47:46.828747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.451 [2024-12-06 13:47:46.839584] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.451 [2024-12-06 13:47:46.839616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.711 [2024-12-06 13:47:46.856104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.711 [2024-12-06 13:47:46.856165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.711 [2024-12-06 13:47:46.872690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.711 [2024-12-06 13:47:46.872722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.711 [2024-12-06 13:47:46.888821] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.711 [2024-12-06 13:47:46.888854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.711 [2024-12-06 13:47:46.905303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.711 [2024-12-06 13:47:46.905334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.711 [2024-12-06 13:47:46.922182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.711 [2024-12-06 13:47:46.922225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.711 [2024-12-06 13:47:46.938144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.711 [2024-12-06 13:47:46.938173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.711 [2024-12-06 13:47:46.954927] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.711 [2024-12-06 13:47:46.954960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.711 [2024-12-06 13:47:46.971800] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.711 [2024-12-06 13:47:46.971984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.711 [2024-12-06 13:47:46.988359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.711 [2024-12-06 13:47:46.988391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.711 [2024-12-06 13:47:47.004601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.711 [2024-12-06 13:47:47.004632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.711 [2024-12-06 13:47:47.016530] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.711 [2024-12-06 13:47:47.016562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.711 [2024-12-06 13:47:47.032005] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.711 [2024-12-06 13:47:47.032040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.711 [2024-12-06 13:47:47.048840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.711 [2024-12-06 13:47:47.048872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.711 [2024-12-06 13:47:47.065311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.711 [2024-12-06 13:47:47.065343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.711 [2024-12-06 13:47:47.081506] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.711 [2024-12-06 13:47:47.081537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.711 [2024-12-06 13:47:47.097942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.711 [2024-12-06 13:47:47.097974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.711 [2024-12-06 13:47:47.108380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.711 [2024-12-06 13:47:47.108428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.970 [2024-12-06 13:47:47.123858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.970 [2024-12-06 13:47:47.123894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.970 [2024-12-06 13:47:47.140965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.970 [2024-12-06 13:47:47.140999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.970 [2024-12-06 13:47:47.157893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.970 [2024-12-06 13:47:47.158063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.970 [2024-12-06 13:47:47.175068] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.970 [2024-12-06 13:47:47.175145] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.970 [2024-12-06 13:47:47.190951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.970 [2024-12-06 13:47:47.190985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.970 [2024-12-06 13:47:47.201956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.970 [2024-12-06 13:47:47.201988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.970 [2024-12-06 13:47:47.217864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.970 [2024-12-06 13:47:47.217897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.970 [2024-12-06 13:47:47.234715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.970 [2024-12-06 13:47:47.234765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.970 [2024-12-06 13:47:47.250049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.970 [2024-12-06 13:47:47.250081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.970 [2024-12-06 13:47:47.261240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.970 [2024-12-06 13:47:47.261271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.970 [2024-12-06 13:47:47.276758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.970 [2024-12-06 13:47:47.276804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.970 [2024-12-06 13:47:47.292882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.971 [2024-12-06 13:47:47.292930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.971 [2024-12-06 13:47:47.309994] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.971 [2024-12-06 13:47:47.310027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.971 [2024-12-06 13:47:47.326465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.971 [2024-12-06 13:47:47.326498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.971 [2024-12-06 13:47:47.342922] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.971 [2024-12-06 13:47:47.342954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.971 [2024-12-06 13:47:47.359339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.971 [2024-12-06 13:47:47.359370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.230 [2024-12-06 13:47:47.377274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.230 [2024-12-06 13:47:47.377305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.230 [2024-12-06 13:47:47.392925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.230 [2024-12-06 13:47:47.392958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.230 [2024-12-06 13:47:47.409867] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.230 [2024-12-06 13:47:47.409899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.230 [2024-12-06 13:47:47.426190] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.230 [2024-12-06 13:47:47.426222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.230 [2024-12-06 13:47:47.442760] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.230 [2024-12-06 13:47:47.442794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.230 [2024-12-06 13:47:47.459094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.230 [2024-12-06 13:47:47.459136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.230 [2024-12-06 13:47:47.470825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.230 [2024-12-06 13:47:47.470857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.230 13889.00 IOPS, 108.51 MiB/s [2024-12-06T13:47:47.634Z] [2024-12-06 13:47:47.486196] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.230 [2024-12-06 13:47:47.486228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.230 [2024-12-06 13:47:47.502665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.230 [2024-12-06 13:47:47.502696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.230 [2024-12-06 13:47:47.518765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.230 [2024-12-06 13:47:47.518797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.230 [2024-12-06 13:47:47.535151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.230 [2024-12-06 13:47:47.535181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.230 [2024-12-06 13:47:47.551195] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.230 [2024-12-06 13:47:47.551227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.230 [2024-12-06 13:47:47.561683] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.230 [2024-12-06 13:47:47.561849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.230 [2024-12-06 13:47:47.577344] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.230 [2024-12-06 13:47:47.577377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.230 [2024-12-06 13:47:47.593799] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.230 [2024-12-06 13:47:47.593831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.230 [2024-12-06 13:47:47.610057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.231 [2024-12-06 13:47:47.610089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.231 [2024-12-06 13:47:47.622276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:48.231 [2024-12-06 13:47:47.622308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.490 [2024-12-06 13:47:47.638782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.490 [2024-12-06 13:47:47.638816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.490 [2024-12-06 13:47:47.655008] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.490 [2024-12-06 13:47:47.655041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.490 [2024-12-06 13:47:47.672819] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.490 [2024-12-06 13:47:47.672986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.490 [2024-12-06 13:47:47.689256] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.490 [2024-12-06 13:47:47.689288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.490 [2024-12-06 13:47:47.705926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.490 [2024-12-06 13:47:47.705960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.490 [2024-12-06 13:47:47.722305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.490 [2024-12-06 13:47:47.722337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.490 [2024-12-06 13:47:47.738812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.490 [2024-12-06 13:47:47.738844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.490 [2024-12-06 13:47:47.756440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.490 [2024-12-06 13:47:47.756490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.490 [2024-12-06 13:47:47.773002] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.490 [2024-12-06 13:47:47.773035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.490 [2024-12-06 13:47:47.789579] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.490 [2024-12-06 13:47:47.789612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.490 [2024-12-06 13:47:47.806356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.490 [2024-12-06 13:47:47.806393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.490 [2024-12-06 13:47:47.822421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.490 [2024-12-06 13:47:47.822462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.490 [2024-12-06 13:47:47.839094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.490 [2024-12-06 13:47:47.839137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.490 [2024-12-06 13:47:47.855959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.490 [2024-12-06 13:47:47.855992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.490 [2024-12-06 13:47:47.871786] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.490 [2024-12-06 13:47:47.871820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.490 [2024-12-06 13:47:47.883171] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.490 [2024-12-06 13:47:47.883209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.750 [2024-12-06 13:47:47.899582] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.750 [2024-12-06 13:47:47.899615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.750 [2024-12-06 13:47:47.915742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.750 [2024-12-06 13:47:47.915776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.750 [2024-12-06 13:47:47.932273] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.750 [2024-12-06 13:47:47.932306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.750 [2024-12-06 13:47:47.948880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.750 [2024-12-06 13:47:47.948912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.750 [2024-12-06 13:47:47.965513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.750 [2024-12-06 13:47:47.965675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.750 [2024-12-06 13:47:47.981592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.750 [2024-12-06 13:47:47.981624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.750 [2024-12-06 13:47:47.998122] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.750 [2024-12-06 13:47:47.998164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.750 [2024-12-06 13:47:48.014405] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.750 [2024-12-06 13:47:48.014436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.750 [2024-12-06 13:47:48.024836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.750 [2024-12-06 13:47:48.024997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.750 [2024-12-06 13:47:48.040039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.750 [2024-12-06 13:47:48.040226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.750 [2024-12-06 13:47:48.056811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.750 [2024-12-06 13:47:48.056845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.750 [2024-12-06 13:47:48.072967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.750 [2024-12-06 13:47:48.073000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.750 [2024-12-06 13:47:48.083768] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.750 [2024-12-06 13:47:48.083950] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.750 [2024-12-06 13:47:48.098979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.750 [2024-12-06 13:47:48.099155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.750 [2024-12-06 13:47:48.115439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.750 [2024-12-06 13:47:48.115471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.750 [2024-12-06 13:47:48.131512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.750 [2024-12-06 13:47:48.131544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.750 [2024-12-06 13:47:48.143031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.750 [2024-12-06 13:47:48.143205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.010 [2024-12-06 13:47:48.158815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.010 [2024-12-06 13:47:48.158980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.010 [2024-12-06 13:47:48.175831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.010 [2024-12-06 13:47:48.175866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.010 [2024-12-06 13:47:48.192258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.010 [2024-12-06 13:47:48.192290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.010 [2024-12-06 13:47:48.209160] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.010 [2024-12-06 13:47:48.209192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.010 [2024-12-06 13:47:48.226073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.010 [2024-12-06 13:47:48.226144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.010 [2024-12-06 13:47:48.241580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.010 [2024-12-06 13:47:48.241615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.010 [2024-12-06 13:47:48.256839] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.010 [2024-12-06 13:47:48.257003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.010 [2024-12-06 13:47:48.274624] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.010 [2024-12-06 13:47:48.274659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.010 [2024-12-06 13:47:48.291572] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.010 [2024-12-06 13:47:48.291785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.010 [2024-12-06 13:47:48.308411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.010 [2024-12-06 13:47:48.308444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.010 [2024-12-06 13:47:48.323948] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.010 [2024-12-06 13:47:48.323998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.010 [2024-12-06 13:47:48.338519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.010 [2024-12-06 13:47:48.338679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.010 [2024-12-06 13:47:48.349393] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.010 [2024-12-06 13:47:48.349559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.010 [2024-12-06 13:47:48.365127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.010 [2024-12-06 13:47:48.365159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.010 [2024-12-06 13:47:48.382294] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.010 [2024-12-06 13:47:48.382328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.010 [2024-12-06 13:47:48.397077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.010 [2024-12-06 13:47:48.397256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.010 [2024-12-06 13:47:48.408024] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.010 [2024-12-06 13:47:48.408061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.270 [2024-12-06 13:47:48.423259] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.270 [2024-12-06 13:47:48.423291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.270 [2024-12-06 13:47:48.439615] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.270 [2024-12-06 13:47:48.439673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.270 [2024-12-06 13:47:48.456282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.270 [2024-12-06 13:47:48.456313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.270 [2024-12-06 13:47:48.472928] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.270 [2024-12-06 13:47:48.472960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.270 13998.50 IOPS, 109.36 MiB/s [2024-12-06T13:47:48.674Z] [2024-12-06 13:47:48.490580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.270 [2024-12-06 13:47:48.490614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.270 [2024-12-06 13:47:48.507136] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.270 [2024-12-06 13:47:48.507168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.270 [2024-12-06 13:47:48.524013] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.270 [2024-12-06 13:47:48.524222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.270 [2024-12-06 13:47:48.541231] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:49.270 [2024-12-06 13:47:48.541264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.270 [2024-12-06 13:47:48.557890] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.270 [2024-12-06 13:47:48.557924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.270 [2024-12-06 13:47:48.575223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.270 [2024-12-06 13:47:48.575255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.270 [2024-12-06 13:47:48.591602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.270 [2024-12-06 13:47:48.591661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.270 [2024-12-06 13:47:48.608711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.270 [2024-12-06 13:47:48.608743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.270 [2024-12-06 13:47:48.625474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.270 [2024-12-06 13:47:48.625658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.270 [2024-12-06 13:47:48.641525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.270 [2024-12-06 13:47:48.641558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.270 [2024-12-06 13:47:48.658624] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.270 [2024-12-06 13:47:48.658657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.530 [2024-12-06 13:47:48.676507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.530 [2024-12-06 13:47:48.676683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.530 [2024-12-06 13:47:48.691400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.530 [2024-12-06 13:47:48.691598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.530 [2024-12-06 13:47:48.702487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.530 [2024-12-06 13:47:48.702666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.530 [2024-12-06 13:47:48.718388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.530 [2024-12-06 13:47:48.718423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.530 [2024-12-06 13:47:48.735203] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.530 [2024-12-06 13:47:48.735235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.530 [2024-12-06 13:47:48.751992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.530 [2024-12-06 13:47:48.752025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.530 [2024-12-06 13:47:48.767945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.530 [2024-12-06 13:47:48.767979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.530 [2024-12-06 13:47:48.784632] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.530 [2024-12-06 13:47:48.784664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.530 [2024-12-06 13:47:48.801456] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.530 [2024-12-06 13:47:48.801639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.530 [2024-12-06 13:47:48.817268] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.530 [2024-12-06 13:47:48.817300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.530 [2024-12-06 13:47:48.834542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.530 [2024-12-06 13:47:48.834576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.530 [2024-12-06 13:47:48.851476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.530 [2024-12-06 13:47:48.851509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.530 [2024-12-06 13:47:48.868321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.530 [2024-12-06 13:47:48.868367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.530 [2024-12-06 13:47:48.885600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.530 [2024-12-06 13:47:48.885633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.530 [2024-12-06 13:47:48.902108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.530 [2024-12-06 13:47:48.902212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.530 [2024-12-06 13:47:48.917883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.530 [2024-12-06 13:47:48.917916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.530 [2024-12-06 13:47:48.929073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.530 [2024-12-06 13:47:48.929131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.790 [2024-12-06 13:47:48.944928] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.790 [2024-12-06 13:47:48.944962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.790 [2024-12-06 13:47:48.961612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.790 [2024-12-06 13:47:48.961645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.790 [2024-12-06 13:47:48.978807] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.790 [2024-12-06 13:47:48.978840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.790 [2024-12-06 13:47:48.995607] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.790 [2024-12-06 13:47:48.995803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.790 [2024-12-06 13:47:49.012458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.790 [2024-12-06 13:47:49.012620] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.790 [2024-12-06 13:47:49.028623] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.790 [2024-12-06 13:47:49.028783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.790 [2024-12-06 13:47:49.045800] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.790 [2024-12-06 13:47:49.045961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.790 [2024-12-06 13:47:49.062625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.790 [2024-12-06 13:47:49.062787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.790 [2024-12-06 13:47:49.079367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.790 [2024-12-06 13:47:49.079528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.790 [2024-12-06 13:47:49.095873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.790 [2024-12-06 13:47:49.096080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.790 [2024-12-06 13:47:49.112972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.790 [2024-12-06 13:47:49.113144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.790 [2024-12-06 13:47:49.130204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.790 [2024-12-06 13:47:49.130378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.790 [2024-12-06 13:47:49.147001] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.790 [2024-12-06 13:47:49.147180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.790 [2024-12-06 13:47:49.164589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.790 [2024-12-06 13:47:49.164750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.790 [2024-12-06 13:47:49.179972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.790 [2024-12-06 13:47:49.180161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.050 [2024-12-06 13:47:49.197501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.050 [2024-12-06 13:47:49.197661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.050 [2024-12-06 13:47:49.213639] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.050 [2024-12-06 13:47:49.213802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.050 [2024-12-06 13:47:49.230331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.050 [2024-12-06 13:47:49.230509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.050 [2024-12-06 13:47:49.247319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.050 [2024-12-06 13:47:49.247483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.050 [2024-12-06 13:47:49.263209] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.050 [2024-12-06 13:47:49.263387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.050 [2024-12-06 13:47:49.280856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.050 [2024-12-06 13:47:49.281013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.050 [2024-12-06 13:47:49.297209] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.050 [2024-12-06 13:47:49.297386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.050 [2024-12-06 13:47:49.313442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.050 [2024-12-06 13:47:49.313607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.050 [2024-12-06 13:47:49.331267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.050 [2024-12-06 13:47:49.331446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.050 [2024-12-06 13:47:49.346004] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.050 [2024-12-06 13:47:49.346198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.050 [2024-12-06 13:47:49.361850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.050 [2024-12-06 13:47:49.362029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.050 [2024-12-06 13:47:49.378071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.050 [2024-12-06 13:47:49.378263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.050 [2024-12-06 13:47:49.395816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.050 [2024-12-06 13:47:49.395984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.050 [2024-12-06 13:47:49.411083] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.050 [2024-12-06 13:47:49.411280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.050 [2024-12-06 13:47:49.422346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.050 [2024-12-06 13:47:49.422545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.050 [2024-12-06 13:47:49.438100] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.050 [2024-12-06 13:47:49.438289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.310 [2024-12-06 13:47:49.456230] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.310 [2024-12-06 13:47:49.456442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.310 [2024-12-06 13:47:49.473692] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.310 [2024-12-06 13:47:49.473882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.310 13872.33 IOPS, 108.38 MiB/s [2024-12-06T13:47:49.714Z] [2024-12-06 13:47:49.489026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:50.310 [2024-12-06 13:47:49.489222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.310 [2024-12-06 13:47:49.500082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.310 [2024-12-06 13:47:49.500290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.310 [2024-12-06 13:47:49.515489] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.310 [2024-12-06 13:47:49.515695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.310 [2024-12-06 13:47:49.532633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.310 [2024-12-06 13:47:49.532806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.310 [2024-12-06 13:47:49.548944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.310 [2024-12-06 13:47:49.549141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.310 [2024-12-06 13:47:49.565726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.310 [2024-12-06 13:47:49.565898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.310 [2024-12-06 13:47:49.582181] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.310 [2024-12-06 13:47:49.582353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.310 [2024-12-06 13:47:49.598803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.310 [2024-12-06 13:47:49.598975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.310 [2024-12-06 13:47:49.615135] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.310 [2024-12-06 13:47:49.615314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.310 [2024-12-06 13:47:49.632305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.310 [2024-12-06 13:47:49.632477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.310 [2024-12-06 13:47:49.648643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.310 [2024-12-06 13:47:49.648817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.310 [2024-12-06 13:47:49.664882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.310 [2024-12-06 13:47:49.665054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.310 [2024-12-06 13:47:49.675443] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.310 [2024-12-06 13:47:49.675617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.310 [2024-12-06 13:47:49.691509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.310 [2024-12-06 13:47:49.691723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.310 [2024-12-06 13:47:49.708712] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.310 [2024-12-06 13:47:49.708891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.569 [2024-12-06 13:47:49.725236] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.570 [2024-12-06 13:47:49.725414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.570 [2024-12-06 13:47:49.741830] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.570 [2024-12-06 13:47:49.742005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.570 [2024-12-06 13:47:49.758254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.570 [2024-12-06 13:47:49.758426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.570 [2024-12-06 13:47:49.774742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.570 [2024-12-06 13:47:49.774915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.570 [2024-12-06 13:47:49.790810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.570 [2024-12-06 13:47:49.790981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.570 [2024-12-06 13:47:49.806390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.570 [2024-12-06 13:47:49.806423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.570 [2024-12-06 13:47:49.822515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.570 [2024-12-06 13:47:49.822548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.570 [2024-12-06 13:47:49.839886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.570 [2024-12-06 13:47:49.839924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.570 [2024-12-06 13:47:49.856450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.570 [2024-12-06 13:47:49.856627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.570 [2024-12-06 13:47:49.872692] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.570 [2024-12-06 13:47:49.872725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.570 [2024-12-06 13:47:49.889319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.570 [2024-12-06 13:47:49.889352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.570 [2024-12-06 13:47:49.906364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.570 [2024-12-06 13:47:49.906398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.570 [2024-12-06 13:47:49.922817] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.570 [2024-12-06 13:47:49.922866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.570 [2024-12-06 13:47:49.939444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.570 [2024-12-06 13:47:49.939476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.570 [2024-12-06 13:47:49.956350] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.570 [2024-12-06 13:47:49.956381] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.829 [2024-12-06 13:47:49.973683] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.829 [2024-12-06 13:47:49.973730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.829 [2024-12-06 13:47:49.989451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.829 [2024-12-06 13:47:49.989498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.829 [2024-12-06 13:47:50.005472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.829 [2024-12-06 13:47:50.005518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.829 [2024-12-06 13:47:50.016887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.829 [2024-12-06 13:47:50.016935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.829 [2024-12-06 13:47:50.032802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.829 [2024-12-06 13:47:50.032850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.829 [2024-12-06 13:47:50.048787] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.829 [2024-12-06 13:47:50.048835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.829 [2024-12-06 13:47:50.063250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.829 [2024-12-06 13:47:50.063297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.829 [2024-12-06 13:47:50.079216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.829 [2024-12-06 13:47:50.079262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.829 [2024-12-06 13:47:50.094591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.829 [2024-12-06 13:47:50.094638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.829 [2024-12-06 13:47:50.110306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.829 [2024-12-06 13:47:50.110353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.829 [2024-12-06 13:47:50.126711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.829 [2024-12-06 13:47:50.126759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.829 [2024-12-06 13:47:50.143338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.829 [2024-12-06 13:47:50.143385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.829 [2024-12-06 13:47:50.159113] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.829 [2024-12-06 13:47:50.159159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.829 [2024-12-06 13:47:50.170951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.829 [2024-12-06 13:47:50.170998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.829 [2024-12-06 13:47:50.186131] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.829 [2024-12-06 13:47:50.186178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.829 [2024-12-06 13:47:50.202558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.829 [2024-12-06 13:47:50.202606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.829 [2024-12-06 13:47:50.218368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.829 [2024-12-06 13:47:50.218414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.089 [2024-12-06 13:47:50.235163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.089 [2024-12-06 13:47:50.235221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.089 [2024-12-06 13:47:50.251334] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.089 [2024-12-06 13:47:50.251382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.089 [2024-12-06 13:47:50.267751] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.089 [2024-12-06 13:47:50.267801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.089 [2024-12-06 13:47:50.283863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.089 [2024-12-06 13:47:50.283913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.089 [2024-12-06 13:47:50.294672] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.089 [2024-12-06 13:47:50.294721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.089 [2024-12-06 13:47:50.311708] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.089 [2024-12-06 13:47:50.311758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.089 [2024-12-06 13:47:50.326797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.089 [2024-12-06 13:47:50.326843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.089 [2024-12-06 13:47:50.338501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.089 [2024-12-06 13:47:50.338548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.089 [2024-12-06 13:47:50.353913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.089 [2024-12-06 13:47:50.353961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.089 [2024-12-06 13:47:50.369870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.089 [2024-12-06 13:47:50.369917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.089 [2024-12-06 13:47:50.381022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.089 [2024-12-06 13:47:50.381078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.089 [2024-12-06 13:47:50.396458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.089 [2024-12-06 13:47:50.396507] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.089 [2024-12-06 13:47:50.413936] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.089 [2024-12-06 13:47:50.413985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.089 [2024-12-06 13:47:50.430264] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.089 [2024-12-06 13:47:50.430311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.089 [2024-12-06 13:47:50.446583] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.089 [2024-12-06 13:47:50.446631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.089 [2024-12-06 13:47:50.463248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.089 [2024-12-06 13:47:50.463294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.089 13975.25 IOPS, 109.18 MiB/s [2024-12-06T13:47:50.493Z] [2024-12-06 13:47:50.479442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.089 [2024-12-06 13:47:50.479488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.349 [2024-12-06 13:47:50.496935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.349 [2024-12-06 13:47:50.496983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.349 [2024-12-06 13:47:50.513758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.349 [2024-12-06 13:47:50.513806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.349 [2024-12-06 13:47:50.529769] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.349 [2024-12-06 13:47:50.529815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.349 [2024-12-06 13:47:50.546320] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.349 [2024-12-06 13:47:50.546367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.349 [2024-12-06 13:47:50.562324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.349 [2024-12-06 13:47:50.562372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.349 [2024-12-06 13:47:50.577568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.349 [2024-12-06 13:47:50.577615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.349 [2024-12-06 13:47:50.593571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.349 [2024-12-06 13:47:50.593618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.349 [2024-12-06 13:47:50.608835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.349 [2024-12-06 13:47:50.608882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.349 [2024-12-06 13:47:50.625355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.349 [2024-12-06 13:47:50.625402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.349 [2024-12-06 
13:47:50.641321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.349 [2024-12-06 13:47:50.641368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.349 [2024-12-06 13:47:50.653371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.349 [2024-12-06 13:47:50.653421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.349 [2024-12-06 13:47:50.668745] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.350 [2024-12-06 13:47:50.668793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.350 [2024-12-06 13:47:50.679959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.350 [2024-12-06 13:47:50.680008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.350 [2024-12-06 13:47:50.695822] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.350 [2024-12-06 13:47:50.695870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.350 [2024-12-06 13:47:50.712542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.350 [2024-12-06 13:47:50.712590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.350 [2024-12-06 13:47:50.729537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.350 [2024-12-06 13:47:50.729586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.350 [2024-12-06 13:47:50.745657] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.350 [2024-12-06 13:47:50.745705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.609 [2024-12-06 13:47:50.762877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.609 [2024-12-06 13:47:50.762924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.609 [2024-12-06 13:47:50.778935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.609 [2024-12-06 13:47:50.778984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.609 [2024-12-06 13:47:50.794703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.609 [2024-12-06 13:47:50.794751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.609 [2024-12-06 13:47:50.809306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.609 [2024-12-06 13:47:50.809353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.609 [2024-12-06 13:47:50.821136] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.609 [2024-12-06 13:47:50.821182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.609 [2024-12-06 13:47:50.837271] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.609 [2024-12-06 13:47:50.837319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.609 [2024-12-06 13:47:50.854975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.609 [2024-12-06 13:47:50.855024] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.609 [2024-12-06 13:47:50.870386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.609 [2024-12-06 13:47:50.870435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.609 [2024-12-06 13:47:50.887143] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.609 [2024-12-06 13:47:50.887190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.609 [2024-12-06 13:47:50.903374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.609 [2024-12-06 13:47:50.903421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.609 [2024-12-06 13:47:50.913941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.609 [2024-12-06 13:47:50.913988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.609 [2024-12-06 13:47:50.929951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.609 [2024-12-06 13:47:50.930000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.609 [2024-12-06 13:47:50.945845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.609 [2024-12-06 13:47:50.945894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.609 [2024-12-06 13:47:50.956373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.609 [2024-12-06 13:47:50.956420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.609 [2024-12-06 13:47:50.971371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.609 [2024-12-06 13:47:50.971418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.609 [2024-12-06 13:47:50.989103] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.609 [2024-12-06 13:47:50.989174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.609 [2024-12-06 13:47:51.004941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.609 [2024-12-06 13:47:51.004975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.869 [2024-12-06 13:47:51.022027] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.869 [2024-12-06 13:47:51.022075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.869 [2024-12-06 13:47:51.039652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.869 [2024-12-06 13:47:51.039701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.869 [2024-12-06 13:47:51.053411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.869 [2024-12-06 13:47:51.053459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.869 [2024-12-06 13:47:51.069223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.869 [2024-12-06 13:47:51.069268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.869 [2024-12-06 13:47:51.085426] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.869 [2024-12-06 13:47:51.085474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.869 [2024-12-06 13:47:51.101212] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.869 [2024-12-06 13:47:51.101258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.869 [2024-12-06 13:47:51.116737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.869 [2024-12-06 13:47:51.116785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.869 [2024-12-06 13:47:51.132633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.869 [2024-12-06 13:47:51.132680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.869 [2024-12-06 13:47:51.144383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.869 [2024-12-06 13:47:51.144429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.869 [2024-12-06 13:47:51.160112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.869 [2024-12-06 13:47:51.160169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.869 [2024-12-06 13:47:51.175887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.869 [2024-12-06 13:47:51.175935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.869 [2024-12-06 13:47:51.192306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.869 [2024-12-06 13:47:51.192355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.869 [2024-12-06 13:47:51.208433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.869 [2024-12-06 13:47:51.208480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.869 [2024-12-06 13:47:51.222998] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.869 [2024-12-06 13:47:51.223044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.869 [2024-12-06 13:47:51.239267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.869 [2024-12-06 13:47:51.239314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.869 [2024-12-06 13:47:51.255219] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.869 [2024-12-06 13:47:51.255265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.869 [2024-12-06 13:47:51.266637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.869 [2024-12-06 13:47:51.266684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.129 [2024-12-06 13:47:51.281967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.129 [2024-12-06 13:47:51.282014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.129 [2024-12-06 13:47:51.292470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.129 [2024-12-06 13:47:51.292518] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.129 [2024-12-06 13:47:51.308444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.129 [2024-12-06 13:47:51.308491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.129 [2024-12-06 13:47:51.325413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.129 [2024-12-06 13:47:51.325463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.129 [2024-12-06 13:47:51.342093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.129 [2024-12-06 13:47:51.342151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.129 [2024-12-06 13:47:51.358399] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.129 [2024-12-06 13:47:51.358446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.129 [2024-12-06 13:47:51.374486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.129 [2024-12-06 13:47:51.374533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.129 [2024-12-06 13:47:51.388471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.129 [2024-12-06 13:47:51.388518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.129 [2024-12-06 13:47:51.403621] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.129 [2024-12-06 13:47:51.403696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.129 [2024-12-06 13:47:51.419777] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.129 [2024-12-06 13:47:51.419826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.129 [2024-12-06 13:47:51.436231] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.129 [2024-12-06 13:47:51.436277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.129 [2024-12-06 13:47:51.447483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.129 [2024-12-06 13:47:51.447531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.129 [2024-12-06 13:47:51.464066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.129 [2024-12-06 13:47:51.464137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.129 14050.20 IOPS, 109.77 MiB/s [2024-12-06T13:47:51.533Z] [2024-12-06 13:47:51.480627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.129 [2024-12-06 13:47:51.480675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.129 00:08:52.129 Latency(us) 00:08:52.129 [2024-12-06T13:47:51.533Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:52.129 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:52.129 Nvme1n1 : 5.01 14050.13 109.77 0.00 0.00 9099.28 3723.64 16920.20 00:08:52.129 [2024-12-06T13:47:51.533Z] =================================================================================================================== 00:08:52.129 
[2024-12-06T13:47:51.533Z] Total : 14050.13 109.77 0.00 0.00 9099.28 3723.64 16920.20 00:08:52.129 [2024-12-06 13:47:51.492174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.129 [2024-12-06 13:47:51.492237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.129 [2024-12-06 13:47:51.504190] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.129 [2024-12-06 13:47:51.504236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.129 [2024-12-06 13:47:51.516179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.129 [2024-12-06 13:47:51.516228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.129 [2024-12-06 13:47:51.528244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.130 [2024-12-06 13:47:51.528313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.390 [2024-12-06 13:47:51.540200] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.390 [2024-12-06 13:47:51.540248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.390 [2024-12-06 13:47:51.552205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.390 [2024-12-06 13:47:51.552252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.390 [2024-12-06 13:47:51.564214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.390 [2024-12-06 13:47:51.564265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.390 [2024-12-06 13:47:51.576218] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.390 [2024-12-06 13:47:51.576266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.390 [2024-12-06 13:47:51.588221] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.390 [2024-12-06 13:47:51.588271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.390 [2024-12-06 13:47:51.600227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.390 [2024-12-06 13:47:51.600278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.390 [2024-12-06 13:47:51.612236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.390 [2024-12-06 13:47:51.612292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.390 [2024-12-06 13:47:51.624230] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.390 [2024-12-06 13:47:51.624280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.390 [2024-12-06 13:47:51.636232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.390 [2024-12-06 13:47:51.636281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.390 [2024-12-06 13:47:51.648256] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.390 [2024-12-06 13:47:51.648303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.390 [2024-12-06 13:47:51.660245] 
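All of the "Requested NSID 1 already in use" / "Unable to add namespace" pairs above appear to be expected output: while bdevperf drives the 50/50 random read/write job against NSID 1 (the Latency(us) summary reports 14050.13 IOPS with a 9099.28 us average latency over the 5.01 s run), the zcopy test keeps re-issuing nvmf_subsystem_add_ns for the same NSID and the target rejects every attempt. A minimal sketch of that pattern, illustrative only, since the real loop lives in test/nvmf/target/zcopy.sh and its exact shape is an assumption here:
# Sketch: repeatedly request an add of an NSID that is already attached and
# busy serving I/O; every call is expected to fail with "already in use".
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for _ in $(seq 1 50); do
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done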
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.390 [2024-12-06 13:47:51.660292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.390 [2024-12-06 13:47:51.672242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.390 [2024-12-06 13:47:51.672284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.390 [2024-12-06 13:47:51.684228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.390 [2024-12-06 13:47:51.684272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.390 [2024-12-06 13:47:51.696225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.390 [2024-12-06 13:47:51.696265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.390 [2024-12-06 13:47:51.708274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.390 [2024-12-06 13:47:51.708337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.390 [2024-12-06 13:47:51.720240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.390 [2024-12-06 13:47:51.720281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.390 [2024-12-06 13:47:51.732249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.390 [2024-12-06 13:47:51.732276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.390 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65460) - No such process 00:08:52.390 13:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65460 00:08:52.390 13:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:52.390 13:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.390 13:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:52.390 13:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.390 13:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:52.390 13:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.390 13:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:52.390 delay0 00:08:52.390 13:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.390 13:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:52.390 13:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.390 13:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:52.390 13:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.390 13:47:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.3 trsvcid:4420 ns:1' 00:08:52.650 [2024-12-06 13:47:51.935994] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:00.784 Initializing NVMe Controllers 00:09:00.784 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:09:00.784 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:00.784 Initialization complete. Launching workers. 00:09:00.784 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 292, failed: 14526 00:09:00.784 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 14761, failed to submit 57 00:09:00.784 success 14625, unsuccessful 136, failed 0 00:09:00.784 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:00.784 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:00.784 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:00.784 13:47:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:00.784 rmmod nvme_tcp 00:09:00.784 rmmod nvme_fabrics 00:09:00.784 rmmod nvme_keyring 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 65304 ']' 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 65304 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 65304 ']' 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 65304 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65304 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:00.784 killing process with pid 65304 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65304' 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 65304 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 65304 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:00.784 13:47:59 
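The abort run above is set up by swapping the namespace onto a delay bdev: NSID 1 is removed, malloc0 is wrapped in delay0 with roughly one-second latencies, and delay0 is re-added as NSID 1, so commands stay in flight long enough for build/examples/abort to cancel them (14761 aborts submitted, 14625 successful in the 5 s window). Restated as standalone commands, assuming rpc_cmd in the test is the usual thin wrapper around scripts/rpc.py for the running target:
# Values copied from the log above; delay latencies are in microseconds (~1 s per I/O).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
"$rpc" bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
/home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'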
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:09:00.784 00:09:00.784 real 0m26.182s 00:09:00.784 user 0m42.076s 00:09:00.784 sys 0m7.594s 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.784 ************************************ 00:09:00.784 END TEST nvmf_zcopy 00:09:00.784 ************************************ 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:00.784 ************************************ 00:09:00.784 START TEST nvmf_nmic 00:09:00.784 ************************************ 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:00.784 * Looking for test storage... 00:09:00.784 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:00.784 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:00.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.785 --rc genhtml_branch_coverage=1 00:09:00.785 --rc genhtml_function_coverage=1 00:09:00.785 --rc genhtml_legend=1 00:09:00.785 --rc geninfo_all_blocks=1 00:09:00.785 --rc geninfo_unexecuted_blocks=1 00:09:00.785 00:09:00.785 ' 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:00.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.785 --rc genhtml_branch_coverage=1 00:09:00.785 --rc genhtml_function_coverage=1 00:09:00.785 --rc genhtml_legend=1 00:09:00.785 --rc geninfo_all_blocks=1 00:09:00.785 --rc geninfo_unexecuted_blocks=1 00:09:00.785 00:09:00.785 ' 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:00.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.785 --rc genhtml_branch_coverage=1 00:09:00.785 --rc genhtml_function_coverage=1 00:09:00.785 --rc genhtml_legend=1 00:09:00.785 --rc geninfo_all_blocks=1 00:09:00.785 --rc geninfo_unexecuted_blocks=1 00:09:00.785 00:09:00.785 ' 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:00.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.785 --rc genhtml_branch_coverage=1 00:09:00.785 --rc genhtml_function_coverage=1 00:09:00.785 --rc genhtml_legend=1 00:09:00.785 --rc geninfo_all_blocks=1 00:09:00.785 --rc geninfo_unexecuted_blocks=1 00:09:00.785 00:09:00.785 ' 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:00.785 13:47:59 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=cfa2def7-c8af-457f-82a0-b312efdea7f4 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.785 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:00.786 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:00.786 13:47:59 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:00.786 Cannot 
find device "nvmf_init_br" 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:00.786 Cannot find device "nvmf_init_br2" 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:00.786 Cannot find device "nvmf_tgt_br" 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:00.786 Cannot find device "nvmf_tgt_br2" 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:00.786 Cannot find device "nvmf_init_br" 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:00.786 Cannot find device "nvmf_init_br2" 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:00.786 Cannot find device "nvmf_tgt_br" 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:00.786 Cannot find device "nvmf_tgt_br2" 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:00.786 Cannot find device "nvmf_br" 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:00.786 Cannot find device "nvmf_init_if" 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:09:00.786 13:47:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:00.786 Cannot find device "nvmf_init_if2" 00:09:00.786 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:09:00.786 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:00.786 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:00.786 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:09:00.786 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:00.786 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:00.786 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:09:00.786 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:00.786 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:09:00.786 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:00.786 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:00.786 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:00.786 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:00.786 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:00.786 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:00.787 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:00.787 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:00.787 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:00.787 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:00.787 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:00.787 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:00.787 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:00.787 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:00.787 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:01.055 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:01.055 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:01.056 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:01.056 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:01.056 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:01.056 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:01.056 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:01.056 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:01.056 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:01.056 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:01.056 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:01.056 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:01.056 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:01.056 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:01.056 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:01.056 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:01.056 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:01.056 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:09:01.056 00:09:01.056 --- 10.0.0.3 ping statistics --- 00:09:01.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.056 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:09:01.056 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:01.056 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:01.056 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.079 ms 00:09:01.056 00:09:01.056 --- 10.0.0.4 ping statistics --- 00:09:01.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.056 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:09:01.056 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:01.056 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:01.056 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:09:01.056 00:09:01.056 --- 10.0.0.1 ping statistics --- 00:09:01.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.056 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:09:01.056 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:01.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:01.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:09:01.056 00:09:01.056 --- 10.0.0.2 ping statistics --- 00:09:01.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.056 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:09:01.056 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:01.056 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:09:01.056 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:01.056 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:01.056 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:01.056 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:01.056 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:01.056 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:01.056 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:01.056 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:01.056 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:01.056 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:01.056 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.056 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=65847 00:09:01.056 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:01.056 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 65847 00:09:01.056 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 65847 ']' 00:09:01.056 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.056 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:01.056 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.056 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:01.056 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.056 [2024-12-06 13:48:00.374288] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
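Note: the nvmf_veth_init sequence traced above gives the target its own network namespace (nvmf_tgt_ns_spdk) and joins initiator-side and target-side veth ends through the nvmf_br bridge, so the root namespace (10.0.0.1/10.0.0.2) can reach the namespace-side addresses (10.0.0.3/10.0.0.4). A minimal standalone sketch of the same topology, using the interface names and addresses from the log and showing only the first initiator/target pair (the real helper in test/nvmf/common.sh also sets up the second pair and handles teardown):

#!/usr/bin/env bash
# Sketch of the veth/namespace topology built by nvmf_veth_init (names taken from the log).
set -e

ip netns add nvmf_tgt_ns_spdk

# One veth pair for the initiator side, one for the target side; the target end
# is moved into the test namespace.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the two "br" ends so initiator and target subnets can talk to each other.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br

# Allow NVMe/TCP traffic to port 4420 and forwarding across the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity check, as the log does: the namespace address should answer from the root namespace.
ping -c 1 10.0.0.3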
00:09:01.056 [2024-12-06 13:48:00.374389] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:01.315 [2024-12-06 13:48:00.527420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:01.315 [2024-12-06 13:48:00.591402] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:01.315 [2024-12-06 13:48:00.591751] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:01.315 [2024-12-06 13:48:00.591922] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:01.315 [2024-12-06 13:48:00.591994] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:01.315 [2024-12-06 13:48:00.592121] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:01.315 [2024-12-06 13:48:00.593621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.315 [2024-12-06 13:48:00.593773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:01.315 [2024-12-06 13:48:00.593859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.315 [2024-12-06 13:48:00.593858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:01.315 [2024-12-06 13:48:00.668840] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.577 [2024-12-06 13:48:00.801960] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.577 Malloc0 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:01.577 13:48:00 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.577 [2024-12-06 13:48:00.880850] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:01.577 test case1: single bdev can't be used in multiple subsystems 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.577 [2024-12-06 13:48:00.904672] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:01.577 [2024-12-06 13:48:00.904705] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:01.577 [2024-12-06 13:48:00.904716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.577 request: 00:09:01.577 { 00:09:01.577 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:01.577 "namespace": { 00:09:01.577 "bdev_name": "Malloc0", 00:09:01.577 "no_auto_visible": false, 00:09:01.577 "hide_metadata": false 00:09:01.577 }, 00:09:01.577 "method": "nvmf_subsystem_add_ns", 00:09:01.577 "req_id": 1 00:09:01.577 } 00:09:01.577 Got JSON-RPC error response 00:09:01.577 response: 00:09:01.577 { 00:09:01.577 "code": -32602, 00:09:01.577 "message": "Invalid parameters" 00:09:01.577 } 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:01.577 Adding namespace failed - expected result. 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:01.577 test case2: host connect to nvmf target in multiple paths 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.577 [2024-12-06 13:48:00.920785] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.577 13:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid=cfa2def7-c8af-457f-82a0-b312efdea7f4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:01.845 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid=cfa2def7-c8af-457f-82a0-b312efdea7f4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:09:01.845 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:01.845 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:01.845 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:01.845 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:01.845 13:48:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:09:04.380 13:48:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:04.380 13:48:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:04.380 13:48:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:04.380 13:48:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:04.380 13:48:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 
00:09:04.380 13:48:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:04.380 13:48:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:04.380 [global] 00:09:04.380 thread=1 00:09:04.380 invalidate=1 00:09:04.380 rw=write 00:09:04.380 time_based=1 00:09:04.380 runtime=1 00:09:04.380 ioengine=libaio 00:09:04.380 direct=1 00:09:04.380 bs=4096 00:09:04.380 iodepth=1 00:09:04.380 norandommap=0 00:09:04.380 numjobs=1 00:09:04.380 00:09:04.380 verify_dump=1 00:09:04.380 verify_backlog=512 00:09:04.380 verify_state_save=0 00:09:04.380 do_verify=1 00:09:04.380 verify=crc32c-intel 00:09:04.380 [job0] 00:09:04.380 filename=/dev/nvme0n1 00:09:04.380 Could not set queue depth (nvme0n1) 00:09:04.380 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:04.380 fio-3.35 00:09:04.380 Starting 1 thread 00:09:05.370 00:09:05.370 job0: (groupid=0, jobs=1): err= 0: pid=65931: Fri Dec 6 13:48:04 2024 00:09:05.370 read: IOPS=2727, BW=10.7MiB/s (11.2MB/s)(10.7MiB/1001msec) 00:09:05.370 slat (nsec): min=12195, max=59811, avg=15247.63, stdev=5470.59 00:09:05.370 clat (usec): min=122, max=490, avg=190.15, stdev=30.33 00:09:05.370 lat (usec): min=144, max=503, avg=205.40, stdev=31.08 00:09:05.370 clat percentiles (usec): 00:09:05.370 | 1.00th=[ 141], 5.00th=[ 149], 10.00th=[ 155], 20.00th=[ 165], 00:09:05.370 | 30.00th=[ 174], 40.00th=[ 180], 50.00th=[ 188], 60.00th=[ 194], 00:09:05.370 | 70.00th=[ 202], 80.00th=[ 212], 90.00th=[ 229], 95.00th=[ 243], 00:09:05.370 | 99.00th=[ 277], 99.50th=[ 285], 99.90th=[ 441], 99.95th=[ 478], 00:09:05.370 | 99.99th=[ 490] 00:09:05.370 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:05.370 slat (usec): min=17, max=115, avg=22.11, stdev= 7.65 00:09:05.370 clat (usec): min=76, max=2232, avg=118.02, stdev=47.09 00:09:05.370 lat (usec): min=95, max=2278, avg=140.13, stdev=48.39 00:09:05.370 clat percentiles (usec): 00:09:05.370 | 1.00th=[ 83], 5.00th=[ 88], 10.00th=[ 91], 20.00th=[ 97], 00:09:05.370 | 30.00th=[ 102], 40.00th=[ 108], 50.00th=[ 113], 60.00th=[ 119], 00:09:05.370 | 70.00th=[ 126], 80.00th=[ 135], 90.00th=[ 149], 95.00th=[ 159], 00:09:05.370 | 99.00th=[ 190], 99.50th=[ 212], 99.90th=[ 424], 99.95th=[ 619], 00:09:05.370 | 99.99th=[ 2245] 00:09:05.370 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:09:05.370 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:05.370 lat (usec) : 100=14.06%, 250=84.06%, 500=1.84%, 750=0.02% 00:09:05.370 lat (msec) : 4=0.02% 00:09:05.370 cpu : usr=2.00%, sys=8.40%, ctx=5802, majf=0, minf=5 00:09:05.370 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:05.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.370 issued rwts: total=2730,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:05.370 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:05.370 00:09:05.370 Run status group 0 (all jobs): 00:09:05.370 READ: bw=10.7MiB/s (11.2MB/s), 10.7MiB/s-10.7MiB/s (11.2MB/s-11.2MB/s), io=10.7MiB (11.2MB), run=1001-1001msec 00:09:05.370 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:09:05.370 00:09:05.370 Disk stats (read/write): 00:09:05.370 
nvme0n1: ios=2610/2602, merge=0/0, ticks=534/345, in_queue=879, util=91.48% 00:09:05.370 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:05.370 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:05.370 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:05.370 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:05.370 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:05.370 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:05.370 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:05.370 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:05.370 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:05.370 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:05.370 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:05.370 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:05.370 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:05.370 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:05.370 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:05.370 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:05.370 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:05.370 rmmod nvme_tcp 00:09:05.370 rmmod nvme_fabrics 00:09:05.370 rmmod nvme_keyring 00:09:05.370 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:05.370 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:05.370 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:05.370 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 65847 ']' 00:09:05.370 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 65847 00:09:05.370 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 65847 ']' 00:09:05.370 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 65847 00:09:05.370 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:05.370 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:05.370 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65847 00:09:05.370 killing process with pid 65847 00:09:05.370 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:05.370 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:05.370 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65847' 00:09:05.370 13:48:04 
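Note: the fio run above was generated by scripts/fio-wrapper with "-p nvmf -i 4096 -d 1 -t write -r 1 -v"; judging from the [global]/[job0] dump in the log, those flags correspond roughly to a job file like the sketch below (device name as reported by the connect step; a few verify knobs from the dump are carried over verbatim). This is an approximation of the wrapper's output, not the wrapper itself:

; write-verify job against the connected namespace, reconstructed from the dump above
[global]
thread=1
ioengine=libaio
direct=1
rw=write
bs=4096
iodepth=1
numjobs=1
time_based=1
runtime=1
do_verify=1
verify=crc32c-intel
verify_dump=1
verify_backlog=512

[job0]
filename=/dev/nvme0n1

Running it with "fio job0.fio" exercises the same 4 KiB, queue-depth-1 write path; the ~3k IOPS figures in the log are specific to the CI VM and the malloc-backed namespace.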
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 65847 00:09:05.370 13:48:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 65847 00:09:05.934 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:05.934 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:05.934 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:05.934 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:05.934 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:05.934 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:05.934 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:05.934 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:05.934 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:05.934 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:05.934 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:05.934 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:05.934 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:05.934 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:05.934 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:05.934 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:05.934 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:05.934 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:05.934 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:05.934 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:05.934 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:05.934 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:05.934 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:05.934 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.934 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:05.934 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.934 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:09:05.934 00:09:05.934 real 0m5.612s 00:09:05.934 user 0m16.781s 00:09:05.934 sys 0m2.016s 00:09:05.934 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.934 ************************************ 00:09:05.934 END TEST 
nvmf_nmic 00:09:05.934 ************************************ 00:09:05.934 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:05.934 13:48:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:05.934 13:48:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:05.934 13:48:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.934 13:48:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:05.934 ************************************ 00:09:05.934 START TEST nvmf_fio_target 00:09:05.934 ************************************ 00:09:05.934 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:06.192 * Looking for test storage... 00:09:06.192 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:06.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.192 --rc genhtml_branch_coverage=1 00:09:06.192 --rc genhtml_function_coverage=1 00:09:06.192 --rc genhtml_legend=1 00:09:06.192 --rc geninfo_all_blocks=1 00:09:06.192 --rc geninfo_unexecuted_blocks=1 00:09:06.192 00:09:06.192 ' 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:06.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.192 --rc genhtml_branch_coverage=1 00:09:06.192 --rc genhtml_function_coverage=1 00:09:06.192 --rc genhtml_legend=1 00:09:06.192 --rc geninfo_all_blocks=1 00:09:06.192 --rc geninfo_unexecuted_blocks=1 00:09:06.192 00:09:06.192 ' 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:06.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.192 --rc genhtml_branch_coverage=1 00:09:06.192 --rc genhtml_function_coverage=1 00:09:06.192 --rc genhtml_legend=1 00:09:06.192 --rc geninfo_all_blocks=1 00:09:06.192 --rc geninfo_unexecuted_blocks=1 00:09:06.192 00:09:06.192 ' 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:06.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.192 --rc genhtml_branch_coverage=1 00:09:06.192 --rc genhtml_function_coverage=1 00:09:06.192 --rc genhtml_legend=1 00:09:06.192 --rc geninfo_all_blocks=1 00:09:06.192 --rc geninfo_unexecuted_blocks=1 00:09:06.192 00:09:06.192 ' 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:06.192 
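Note: the lcov check traced above goes through cmp_versions/lt from scripts/common.sh, which splits both version strings on ".", "-" and ":" and compares them component by component, so "lt 1.15 2" succeeds because 1 < 2 in the first component. A simplified sketch of that comparison (the real helper also supports ">", ">=", "<=" and versions of unequal length):

# Simplified component-wise version compare: succeeds when $1 < $2 (numeric components only).
version_lt() {
    local IFS=.-:
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i
    for ((i = 0; i < ${#a[@]} && i < ${#b[@]}; i++)); do
        (( a[i] < b[i] )) && return 0
        (( a[i] > b[i] )) && return 1
    done
    return 1  # equal prefix: not strictly less
}

version_lt 1.15 2 && echo "1.15 < 2"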
13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cfa2def7-c8af-457f-82a0-b312efdea7f4 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:06.192 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:06.193 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:06.193 13:48:05 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:06.193 Cannot find device "nvmf_init_br" 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:06.193 Cannot find device "nvmf_init_br2" 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:06.193 Cannot find device "nvmf_tgt_br" 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:06.193 Cannot find device "nvmf_tgt_br2" 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:06.193 Cannot find device "nvmf_init_br" 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:09:06.193 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:06.450 Cannot find device "nvmf_init_br2" 00:09:06.450 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:09:06.450 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:06.450 Cannot find device "nvmf_tgt_br" 00:09:06.450 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:09:06.451 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:06.451 Cannot find device "nvmf_tgt_br2" 00:09:06.451 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:09:06.451 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:06.451 Cannot find device "nvmf_br" 00:09:06.451 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:09:06.451 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:06.451 Cannot find device "nvmf_init_if" 00:09:06.451 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:09:06.451 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:06.451 Cannot find device "nvmf_init_if2" 00:09:06.451 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:09:06.451 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:06.451 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:06.451 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:09:06.451 
13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:06.451 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:06.451 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:09:06.451 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:06.451 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:06.451 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:06.451 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:06.451 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:06.451 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:06.451 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:06.451 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:06.451 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:06.451 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:06.451 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:06.451 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:06.451 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:06.451 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:06.451 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:06.451 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:06.451 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:06.451 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:06.451 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:06.451 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:06.451 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:06.451 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:06.451 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:06.451 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:09:06.709 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:06.709 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:06.709 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:06.709 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:06.709 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:06.709 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:06.709 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:06.709 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:06.709 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:06.709 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:06.709 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:09:06.709 00:09:06.709 --- 10.0.0.3 ping statistics --- 00:09:06.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.709 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:09:06.709 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:06.709 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:06.709 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:09:06.709 00:09:06.709 --- 10.0.0.4 ping statistics --- 00:09:06.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.709 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:09:06.709 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:06.709 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:06.709 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:09:06.709 00:09:06.709 --- 10.0.0.1 ping statistics --- 00:09:06.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.709 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:09:06.709 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:06.709 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:06.709 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:09:06.709 00:09:06.709 --- 10.0.0.2 ping statistics --- 00:09:06.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.709 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:09:06.709 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:06.709 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:09:06.709 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:06.709 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:06.709 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:06.709 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:06.709 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:06.709 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:06.709 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:06.709 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:06.709 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:06.709 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:06.709 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:06.709 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=66158 00:09:06.709 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:06.709 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 66158 00:09:06.709 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 66158 ']' 00:09:06.709 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.709 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:06.709 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.709 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:06.709 13:48:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:06.709 [2024-12-06 13:48:05.991958] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
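The nvmf/common.sh trace above first tears down any stale interfaces (the "Cannot find device" / "Cannot open network namespace" messages are expected when nothing is set up yet) and then builds the test topology: initiator-side veth ends nvmf_init_if/nvmf_init_if2 (10.0.0.1-2/24) stay in the root namespace, target-side ends nvmf_tgt_if/nvmf_tgt_if2 (10.0.0.3-4/24) are moved into the nvmf_tgt_ns_spdk namespace, all bridge-side peers are enslaved to nvmf_br, TCP port 4420 is opened in iptables, and the four pings verify reachability in both directions. A condensed sketch of the same setup follows (first interface pair only; the *_if2/*_br2 pair is created identically). It is an illustration distilled from the trace, not part of the captured output:

    # Sketch of the veth/netns/bridge topology built by nvmf/common.sh above.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the root namespace
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end moves into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP into the initiator interface
    ping -c 1 10.0.0.3   # root namespace -> target namespace sanity check
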
00:09:06.709 [2024-12-06 13:48:05.992025] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:06.968 [2024-12-06 13:48:06.133862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:06.968 [2024-12-06 13:48:06.212142] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:06.968 [2024-12-06 13:48:06.212194] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:06.968 [2024-12-06 13:48:06.212204] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:06.968 [2024-12-06 13:48:06.212212] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:06.968 [2024-12-06 13:48:06.212219] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:06.968 [2024-12-06 13:48:06.213584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.968 [2024-12-06 13:48:06.213734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:06.968 [2024-12-06 13:48:06.213969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.968 [2024-12-06 13:48:06.213832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:06.968 [2024-12-06 13:48:06.285834] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:07.903 13:48:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:07.903 13:48:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:07.903 13:48:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:07.903 13:48:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:07.903 13:48:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:07.903 13:48:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:07.903 13:48:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:07.903 [2024-12-06 13:48:07.265784] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:07.903 13:48:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:08.469 13:48:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:08.469 13:48:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:08.728 13:48:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:08.728 13:48:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:08.987 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:08.987 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:09.246 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:09.246 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:09.505 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:09.763 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:09.763 13:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:10.022 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:10.022 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:10.280 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:10.280 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:10.539 13:48:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:10.797 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:10.797 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:11.056 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:11.056 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:11.314 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:11.314 [2024-12-06 13:48:10.694232] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:11.314 13:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:11.882 13:48:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:11.882 13:48:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid=cfa2def7-c8af-457f-82a0-b312efdea7f4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:12.142 13:48:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:12.142 13:48:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:12.142 13:48:11 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:12.142 13:48:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:12.142 13:48:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:12.142 13:48:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:14.049 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:14.049 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:14.049 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:14.049 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:14.049 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:14.049 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:14.049 13:48:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:14.049 [global] 00:09:14.049 thread=1 00:09:14.049 invalidate=1 00:09:14.049 rw=write 00:09:14.049 time_based=1 00:09:14.049 runtime=1 00:09:14.049 ioengine=libaio 00:09:14.049 direct=1 00:09:14.049 bs=4096 00:09:14.049 iodepth=1 00:09:14.049 norandommap=0 00:09:14.049 numjobs=1 00:09:14.049 00:09:14.049 verify_dump=1 00:09:14.049 verify_backlog=512 00:09:14.049 verify_state_save=0 00:09:14.049 do_verify=1 00:09:14.049 verify=crc32c-intel 00:09:14.049 [job0] 00:09:14.049 filename=/dev/nvme0n1 00:09:14.049 [job1] 00:09:14.049 filename=/dev/nvme0n2 00:09:14.049 [job2] 00:09:14.049 filename=/dev/nvme0n3 00:09:14.049 [job3] 00:09:14.049 filename=/dev/nvme0n4 00:09:14.309 Could not set queue depth (nvme0n1) 00:09:14.309 Could not set queue depth (nvme0n2) 00:09:14.309 Could not set queue depth (nvme0n3) 00:09:14.309 Could not set queue depth (nvme0n4) 00:09:14.309 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:14.309 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:14.309 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:14.309 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:14.309 fio-3.35 00:09:14.309 Starting 4 threads 00:09:15.688 00:09:15.688 job0: (groupid=0, jobs=1): err= 0: pid=66348: Fri Dec 6 13:48:14 2024 00:09:15.688 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:15.688 slat (nsec): min=15144, max=84030, avg=21852.82, stdev=8303.72 00:09:15.688 clat (usec): min=202, max=2921, avg=320.19, stdev=109.95 00:09:15.688 lat (usec): min=219, max=2952, avg=342.04, stdev=113.06 00:09:15.688 clat percentiles (usec): 00:09:15.688 | 1.00th=[ 225], 5.00th=[ 239], 10.00th=[ 249], 20.00th=[ 265], 00:09:15.688 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 293], 60.00th=[ 310], 00:09:15.688 | 70.00th=[ 322], 80.00th=[ 347], 90.00th=[ 433], 95.00th=[ 515], 00:09:15.688 | 99.00th=[ 635], 99.50th=[ 676], 99.90th=[ 1418], 99.95th=[ 2933], 00:09:15.688 | 99.99th=[ 
2933] 00:09:15.688 write: IOPS=1870, BW=7481KiB/s (7660kB/s)(7488KiB/1001msec); 0 zone resets 00:09:15.688 slat (usec): min=22, max=146, avg=29.58, stdev= 8.05 00:09:15.688 clat (usec): min=103, max=769, avg=219.53, stdev=57.99 00:09:15.688 lat (usec): min=127, max=796, avg=249.10, stdev=59.14 00:09:15.688 clat percentiles (usec): 00:09:15.688 | 1.00th=[ 120], 5.00th=[ 137], 10.00th=[ 151], 20.00th=[ 169], 00:09:15.688 | 30.00th=[ 186], 40.00th=[ 204], 50.00th=[ 217], 60.00th=[ 233], 00:09:15.688 | 70.00th=[ 247], 80.00th=[ 265], 90.00th=[ 285], 95.00th=[ 306], 00:09:15.688 | 99.00th=[ 396], 99.50th=[ 441], 99.90th=[ 594], 99.95th=[ 766], 00:09:15.688 | 99.99th=[ 766] 00:09:15.688 bw ( KiB/s): min= 8192, max= 8192, per=25.63%, avg=8192.00, stdev= 0.00, samples=1 00:09:15.688 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:15.688 lat (usec) : 250=44.28%, 500=53.14%, 750=2.46%, 1000=0.06% 00:09:15.688 lat (msec) : 2=0.03%, 4=0.03% 00:09:15.688 cpu : usr=1.90%, sys=7.00%, ctx=3408, majf=0, minf=3 00:09:15.688 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:15.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.688 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.688 issued rwts: total=1536,1872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:15.688 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:15.688 job1: (groupid=0, jobs=1): err= 0: pid=66349: Fri Dec 6 13:48:14 2024 00:09:15.688 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:15.688 slat (nsec): min=11614, max=60738, avg=15276.60, stdev=5706.79 00:09:15.688 clat (usec): min=124, max=1920, avg=195.54, stdev=44.86 00:09:15.688 lat (usec): min=143, max=1933, avg=210.82, stdev=45.17 00:09:15.688 clat percentiles (usec): 00:09:15.688 | 1.00th=[ 143], 5.00th=[ 153], 10.00th=[ 159], 20.00th=[ 169], 00:09:15.688 | 30.00th=[ 178], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 200], 00:09:15.688 | 70.00th=[ 208], 80.00th=[ 219], 90.00th=[ 235], 95.00th=[ 249], 00:09:15.688 | 99.00th=[ 273], 99.50th=[ 285], 99.90th=[ 306], 99.95th=[ 314], 00:09:15.688 | 99.99th=[ 1926] 00:09:15.688 write: IOPS=2707, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1001msec); 0 zone resets 00:09:15.688 slat (usec): min=14, max=150, avg=22.92, stdev= 7.71 00:09:15.688 clat (usec): min=77, max=806, avg=143.77, stdev=29.83 00:09:15.688 lat (usec): min=107, max=831, avg=166.69, stdev=30.68 00:09:15.688 clat percentiles (usec): 00:09:15.688 | 1.00th=[ 100], 5.00th=[ 108], 10.00th=[ 113], 20.00th=[ 121], 00:09:15.688 | 30.00th=[ 128], 40.00th=[ 135], 50.00th=[ 141], 60.00th=[ 147], 00:09:15.688 | 70.00th=[ 155], 80.00th=[ 163], 90.00th=[ 180], 95.00th=[ 196], 00:09:15.688 | 99.00th=[ 221], 99.50th=[ 235], 99.90th=[ 265], 99.95th=[ 289], 00:09:15.688 | 99.99th=[ 807] 00:09:15.688 bw ( KiB/s): min=12288, max=12288, per=38.45%, avg=12288.00, stdev= 0.00, samples=1 00:09:15.688 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:15.688 lat (usec) : 100=0.66%, 250=96.81%, 500=2.49%, 1000=0.02% 00:09:15.688 lat (msec) : 2=0.02% 00:09:15.688 cpu : usr=1.70%, sys=8.10%, ctx=5271, majf=0, minf=11 00:09:15.688 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:15.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.688 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.688 issued rwts: total=2560,2710,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:15.688 
latency : target=0, window=0, percentile=100.00%, depth=1 00:09:15.688 job2: (groupid=0, jobs=1): err= 0: pid=66350: Fri Dec 6 13:48:14 2024 00:09:15.688 read: IOPS=1427, BW=5710KiB/s (5847kB/s)(5716KiB/1001msec) 00:09:15.688 slat (usec): min=14, max=111, avg=25.01, stdev=12.74 00:09:15.688 clat (usec): min=205, max=740, avg=342.96, stdev=87.39 00:09:15.688 lat (usec): min=225, max=763, avg=367.97, stdev=95.07 00:09:15.688 clat percentiles (usec): 00:09:15.688 | 1.00th=[ 229], 5.00th=[ 245], 10.00th=[ 258], 20.00th=[ 273], 00:09:15.688 | 30.00th=[ 281], 40.00th=[ 297], 50.00th=[ 314], 60.00th=[ 334], 00:09:15.688 | 70.00th=[ 371], 80.00th=[ 429], 90.00th=[ 474], 95.00th=[ 519], 00:09:15.688 | 99.00th=[ 578], 99.50th=[ 619], 99.90th=[ 644], 99.95th=[ 742], 00:09:15.688 | 99.99th=[ 742] 00:09:15.688 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:15.688 slat (usec): min=18, max=131, avg=34.90, stdev=11.41 00:09:15.688 clat (usec): min=113, max=633, avg=268.16, stdev=90.57 00:09:15.688 lat (usec): min=137, max=692, avg=303.06, stdev=96.13 00:09:15.688 clat percentiles (usec): 00:09:15.688 | 1.00th=[ 128], 5.00th=[ 145], 10.00th=[ 167], 20.00th=[ 202], 00:09:15.688 | 30.00th=[ 221], 40.00th=[ 237], 50.00th=[ 251], 60.00th=[ 265], 00:09:15.688 | 70.00th=[ 285], 80.00th=[ 326], 90.00th=[ 412], 95.00th=[ 453], 00:09:15.688 | 99.00th=[ 529], 99.50th=[ 562], 99.90th=[ 627], 99.95th=[ 635], 00:09:15.688 | 99.99th=[ 635] 00:09:15.688 bw ( KiB/s): min= 7824, max= 7824, per=24.48%, avg=7824.00, stdev= 0.00, samples=1 00:09:15.688 iops : min= 1956, max= 1956, avg=1956.00, stdev= 0.00, samples=1 00:09:15.688 lat (usec) : 250=29.21%, 500=66.64%, 750=4.15% 00:09:15.688 cpu : usr=2.50%, sys=6.60%, ctx=2965, majf=0, minf=11 00:09:15.689 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:15.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.689 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.689 issued rwts: total=1429,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:15.689 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:15.689 job3: (groupid=0, jobs=1): err= 0: pid=66351: Fri Dec 6 13:48:14 2024 00:09:15.689 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:15.689 slat (nsec): min=14698, max=63655, avg=18357.75, stdev=5124.58 00:09:15.689 clat (usec): min=206, max=1302, avg=290.88, stdev=45.35 00:09:15.689 lat (usec): min=224, max=1355, avg=309.24, stdev=46.51 00:09:15.689 clat percentiles (usec): 00:09:15.689 | 1.00th=[ 229], 5.00th=[ 243], 10.00th=[ 249], 20.00th=[ 262], 00:09:15.689 | 30.00th=[ 269], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 293], 00:09:15.689 | 70.00th=[ 306], 80.00th=[ 318], 90.00th=[ 334], 95.00th=[ 351], 00:09:15.689 | 99.00th=[ 400], 99.50th=[ 429], 99.90th=[ 668], 99.95th=[ 1303], 00:09:15.689 | 99.99th=[ 1303] 00:09:15.689 write: IOPS=1878, BW=7512KiB/s (7693kB/s)(7520KiB/1001msec); 0 zone resets 00:09:15.689 slat (usec): min=21, max=136, avg=31.50, stdev= 9.60 00:09:15.689 clat (usec): min=152, max=5137, avg=243.66, stdev=169.95 00:09:15.689 lat (usec): min=175, max=5160, avg=275.16, stdev=170.46 00:09:15.689 clat percentiles (usec): 00:09:15.689 | 1.00th=[ 172], 5.00th=[ 186], 10.00th=[ 194], 20.00th=[ 206], 00:09:15.689 | 30.00th=[ 217], 40.00th=[ 225], 50.00th=[ 233], 60.00th=[ 241], 00:09:15.689 | 70.00th=[ 251], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 293], 00:09:15.689 | 99.00th=[ 338], 99.50th=[ 424], 99.90th=[ 
3359], 99.95th=[ 5145], 00:09:15.689 | 99.99th=[ 5145] 00:09:15.689 bw ( KiB/s): min= 8192, max= 8192, per=25.63%, avg=8192.00, stdev= 0.00, samples=1 00:09:15.689 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:15.689 lat (usec) : 250=42.77%, 500=56.82%, 750=0.18%, 1000=0.03% 00:09:15.689 lat (msec) : 2=0.09%, 4=0.09%, 10=0.03% 00:09:15.689 cpu : usr=1.90%, sys=6.90%, ctx=3418, majf=0, minf=11 00:09:15.689 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:15.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.689 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.689 issued rwts: total=1536,1880,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:15.689 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:15.689 00:09:15.689 Run status group 0 (all jobs): 00:09:15.689 READ: bw=27.6MiB/s (28.9MB/s), 5710KiB/s-9.99MiB/s (5847kB/s-10.5MB/s), io=27.6MiB (28.9MB), run=1001-1001msec 00:09:15.689 WRITE: bw=31.2MiB/s (32.7MB/s), 6138KiB/s-10.6MiB/s (6285kB/s-11.1MB/s), io=31.2MiB (32.8MB), run=1001-1001msec 00:09:15.689 00:09:15.689 Disk stats (read/write): 00:09:15.689 nvme0n1: ios=1374/1536, merge=0/0, ticks=479/361, in_queue=840, util=88.08% 00:09:15.689 nvme0n2: ios=2082/2470, merge=0/0, ticks=428/384, in_queue=812, util=88.00% 00:09:15.689 nvme0n3: ios=1058/1536, merge=0/0, ticks=379/424, in_queue=803, util=89.30% 00:09:15.689 nvme0n4: ios=1351/1536, merge=0/0, ticks=401/384, in_queue=785, util=88.91% 00:09:15.689 13:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:15.689 [global] 00:09:15.689 thread=1 00:09:15.689 invalidate=1 00:09:15.689 rw=randwrite 00:09:15.689 time_based=1 00:09:15.689 runtime=1 00:09:15.689 ioengine=libaio 00:09:15.689 direct=1 00:09:15.689 bs=4096 00:09:15.689 iodepth=1 00:09:15.689 norandommap=0 00:09:15.689 numjobs=1 00:09:15.689 00:09:15.689 verify_dump=1 00:09:15.689 verify_backlog=512 00:09:15.689 verify_state_save=0 00:09:15.689 do_verify=1 00:09:15.689 verify=crc32c-intel 00:09:15.689 [job0] 00:09:15.689 filename=/dev/nvme0n1 00:09:15.689 [job1] 00:09:15.689 filename=/dev/nvme0n2 00:09:15.689 [job2] 00:09:15.689 filename=/dev/nvme0n3 00:09:15.689 [job3] 00:09:15.689 filename=/dev/nvme0n4 00:09:15.689 Could not set queue depth (nvme0n1) 00:09:15.689 Could not set queue depth (nvme0n2) 00:09:15.689 Could not set queue depth (nvme0n3) 00:09:15.689 Could not set queue depth (nvme0n4) 00:09:15.689 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:15.689 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:15.689 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:15.689 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:15.689 fio-3.35 00:09:15.689 Starting 4 threads 00:09:17.064 00:09:17.064 job0: (groupid=0, jobs=1): err= 0: pid=66409: Fri Dec 6 13:48:16 2024 00:09:17.064 read: IOPS=2353, BW=9415KiB/s (9641kB/s)(9424KiB/1001msec) 00:09:17.064 slat (nsec): min=11680, max=67370, avg=15403.94, stdev=5172.93 00:09:17.064 clat (usec): min=126, max=1782, avg=218.31, stdev=49.93 00:09:17.064 lat (usec): min=140, max=1795, avg=233.72, stdev=50.30 00:09:17.064 clat percentiles (usec): 
00:09:17.064 | 1.00th=[ 147], 5.00th=[ 159], 10.00th=[ 169], 20.00th=[ 182], 00:09:17.064 | 30.00th=[ 196], 40.00th=[ 208], 50.00th=[ 219], 60.00th=[ 229], 00:09:17.064 | 70.00th=[ 239], 80.00th=[ 249], 90.00th=[ 265], 95.00th=[ 281], 00:09:17.064 | 99.00th=[ 314], 99.50th=[ 330], 99.90th=[ 371], 99.95th=[ 586], 00:09:17.064 | 99.99th=[ 1778] 00:09:17.064 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:17.064 slat (usec): min=16, max=101, avg=20.98, stdev= 5.19 00:09:17.064 clat (usec): min=85, max=290, avg=151.31, stdev=33.42 00:09:17.064 lat (usec): min=103, max=318, avg=172.29, stdev=34.75 00:09:17.064 clat percentiles (usec): 00:09:17.064 | 1.00th=[ 99], 5.00th=[ 108], 10.00th=[ 113], 20.00th=[ 122], 00:09:17.064 | 30.00th=[ 129], 40.00th=[ 137], 50.00th=[ 145], 60.00th=[ 155], 00:09:17.064 | 70.00th=[ 167], 80.00th=[ 182], 90.00th=[ 198], 95.00th=[ 212], 00:09:17.064 | 99.00th=[ 243], 99.50th=[ 253], 99.90th=[ 281], 99.95th=[ 285], 00:09:17.064 | 99.99th=[ 293] 00:09:17.064 bw ( KiB/s): min=11624, max=11624, per=29.90%, avg=11624.00, stdev= 0.00, samples=1 00:09:17.064 iops : min= 2906, max= 2906, avg=2906.00, stdev= 0.00, samples=1 00:09:17.064 lat (usec) : 100=0.67%, 250=90.05%, 500=9.24%, 750=0.02% 00:09:17.064 lat (msec) : 2=0.02% 00:09:17.064 cpu : usr=2.30%, sys=6.50%, ctx=4916, majf=0, minf=7 00:09:17.064 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:17.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.064 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.064 issued rwts: total=2356,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:17.064 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:17.064 job1: (groupid=0, jobs=1): err= 0: pid=66410: Fri Dec 6 13:48:16 2024 00:09:17.064 read: IOPS=2385, BW=9542KiB/s (9771kB/s)(9552KiB/1001msec) 00:09:17.064 slat (nsec): min=11945, max=86035, avg=13957.17, stdev=3648.39 00:09:17.064 clat (usec): min=133, max=3287, avg=217.02, stdev=74.89 00:09:17.064 lat (usec): min=145, max=3302, avg=230.98, stdev=75.10 00:09:17.064 clat percentiles (usec): 00:09:17.064 | 1.00th=[ 147], 5.00th=[ 157], 10.00th=[ 165], 20.00th=[ 180], 00:09:17.064 | 30.00th=[ 192], 40.00th=[ 204], 50.00th=[ 217], 60.00th=[ 227], 00:09:17.064 | 70.00th=[ 237], 80.00th=[ 247], 90.00th=[ 265], 95.00th=[ 277], 00:09:17.064 | 99.00th=[ 314], 99.50th=[ 338], 99.90th=[ 635], 99.95th=[ 693], 00:09:17.064 | 99.99th=[ 3294] 00:09:17.064 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:17.064 slat (nsec): min=14635, max=82398, avg=19851.58, stdev=4287.88 00:09:17.064 clat (usec): min=89, max=1950, avg=152.33, stdev=49.17 00:09:17.064 lat (usec): min=107, max=1969, avg=172.18, stdev=49.67 00:09:17.064 clat percentiles (usec): 00:09:17.064 | 1.00th=[ 99], 5.00th=[ 110], 10.00th=[ 115], 20.00th=[ 123], 00:09:17.064 | 30.00th=[ 129], 40.00th=[ 137], 50.00th=[ 145], 60.00th=[ 157], 00:09:17.064 | 70.00th=[ 167], 80.00th=[ 182], 90.00th=[ 200], 95.00th=[ 212], 00:09:17.064 | 99.00th=[ 245], 99.50th=[ 258], 99.90th=[ 334], 99.95th=[ 486], 00:09:17.064 | 99.99th=[ 1958] 00:09:17.064 bw ( KiB/s): min=12160, max=12160, per=31.28%, avg=12160.00, stdev= 0.00, samples=1 00:09:17.064 iops : min= 3040, max= 3040, avg=3040.00, stdev= 0.00, samples=1 00:09:17.064 lat (usec) : 100=0.81%, 250=89.89%, 500=9.22%, 750=0.04% 00:09:17.064 lat (msec) : 2=0.02%, 4=0.02% 00:09:17.064 cpu : usr=1.70%, sys=6.60%, ctx=4950, majf=0, minf=13 
00:09:17.064 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:17.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.064 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.064 issued rwts: total=2388,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:17.064 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:17.064 job2: (groupid=0, jobs=1): err= 0: pid=66411: Fri Dec 6 13:48:16 2024 00:09:17.064 read: IOPS=1875, BW=7500KiB/s (7681kB/s)(7508KiB/1001msec) 00:09:17.064 slat (nsec): min=12546, max=55461, avg=16217.86, stdev=4002.92 00:09:17.064 clat (usec): min=183, max=795, avg=260.94, stdev=36.24 00:09:17.064 lat (usec): min=196, max=819, avg=277.16, stdev=37.08 00:09:17.064 clat percentiles (usec): 00:09:17.064 | 1.00th=[ 204], 5.00th=[ 215], 10.00th=[ 223], 20.00th=[ 233], 00:09:17.064 | 30.00th=[ 241], 40.00th=[ 249], 50.00th=[ 258], 60.00th=[ 265], 00:09:17.064 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 302], 95.00th=[ 318], 00:09:17.064 | 99.00th=[ 351], 99.50th=[ 367], 99.90th=[ 709], 99.95th=[ 799], 00:09:17.064 | 99.99th=[ 799] 00:09:17.064 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:17.064 slat (nsec): min=17620, max=90158, avg=24086.35, stdev=6141.25 00:09:17.064 clat (usec): min=128, max=6628, avg=206.71, stdev=182.27 00:09:17.064 lat (usec): min=151, max=6659, avg=230.80, stdev=182.73 00:09:17.064 clat percentiles (usec): 00:09:17.064 | 1.00th=[ 147], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 174], 00:09:17.064 | 30.00th=[ 180], 40.00th=[ 186], 50.00th=[ 194], 60.00th=[ 202], 00:09:17.064 | 70.00th=[ 212], 80.00th=[ 223], 90.00th=[ 241], 95.00th=[ 258], 00:09:17.064 | 99.00th=[ 289], 99.50th=[ 310], 99.90th=[ 3130], 99.95th=[ 3621], 00:09:17.064 | 99.99th=[ 6652] 00:09:17.064 bw ( KiB/s): min= 8192, max= 8192, per=21.07%, avg=8192.00, stdev= 0.00, samples=1 00:09:17.064 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:17.064 lat (usec) : 250=68.43%, 500=31.31%, 750=0.05%, 1000=0.10% 00:09:17.064 lat (msec) : 2=0.03%, 4=0.05%, 10=0.03% 00:09:17.064 cpu : usr=2.10%, sys=5.80%, ctx=3929, majf=0, minf=15 00:09:17.064 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:17.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.064 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.064 issued rwts: total=1877,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:17.064 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:17.064 job3: (groupid=0, jobs=1): err= 0: pid=66412: Fri Dec 6 13:48:16 2024 00:09:17.064 read: IOPS=2181, BW=8727KiB/s (8937kB/s)(8736KiB/1001msec) 00:09:17.064 slat (nsec): min=11049, max=60918, avg=14584.49, stdev=4074.46 00:09:17.064 clat (usec): min=150, max=2866, avg=224.22, stdev=71.06 00:09:17.064 lat (usec): min=162, max=2882, avg=238.81, stdev=71.46 00:09:17.064 clat percentiles (usec): 00:09:17.064 | 1.00th=[ 163], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 190], 00:09:17.064 | 30.00th=[ 200], 40.00th=[ 210], 50.00th=[ 219], 60.00th=[ 227], 00:09:17.064 | 70.00th=[ 237], 80.00th=[ 251], 90.00th=[ 269], 95.00th=[ 285], 00:09:17.064 | 99.00th=[ 334], 99.50th=[ 379], 99.90th=[ 717], 99.95th=[ 1156], 00:09:17.064 | 99.99th=[ 2868] 00:09:17.064 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:17.064 slat (nsec): min=14525, max=67878, avg=21232.91, stdev=4474.78 00:09:17.064 clat 
(usec): min=103, max=713, avg=162.54, stdev=32.24 00:09:17.064 lat (usec): min=121, max=736, avg=183.77, stdev=33.24 00:09:17.064 clat percentiles (usec): 00:09:17.064 | 1.00th=[ 115], 5.00th=[ 122], 10.00th=[ 127], 20.00th=[ 135], 00:09:17.064 | 30.00th=[ 143], 40.00th=[ 151], 50.00th=[ 159], 60.00th=[ 167], 00:09:17.064 | 70.00th=[ 178], 80.00th=[ 188], 90.00th=[ 204], 95.00th=[ 215], 00:09:17.064 | 99.00th=[ 251], 99.50th=[ 258], 99.90th=[ 355], 99.95th=[ 396], 00:09:17.064 | 99.99th=[ 717] 00:09:17.064 bw ( KiB/s): min=10264, max=10264, per=26.40%, avg=10264.00, stdev= 0.00, samples=1 00:09:17.064 iops : min= 2566, max= 2566, avg=2566.00, stdev= 0.00, samples=1 00:09:17.064 lat (usec) : 250=90.28%, 500=9.63%, 750=0.04% 00:09:17.064 lat (msec) : 2=0.02%, 4=0.02% 00:09:17.064 cpu : usr=1.80%, sys=6.90%, ctx=4744, majf=0, minf=11 00:09:17.064 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:17.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.064 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.065 issued rwts: total=2184,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:17.065 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:17.065 00:09:17.065 Run status group 0 (all jobs): 00:09:17.065 READ: bw=34.4MiB/s (36.0MB/s), 7500KiB/s-9542KiB/s (7681kB/s-9771kB/s), io=34.4MiB (36.1MB), run=1001-1001msec 00:09:17.065 WRITE: bw=38.0MiB/s (39.8MB/s), 8184KiB/s-9.99MiB/s (8380kB/s-10.5MB/s), io=38.0MiB (39.8MB), run=1001-1001msec 00:09:17.065 00:09:17.065 Disk stats (read/write): 00:09:17.065 nvme0n1: ios=2098/2197, merge=0/0, ticks=511/355, in_queue=866, util=88.78% 00:09:17.065 nvme0n2: ios=2084/2246, merge=0/0, ticks=468/361, in_queue=829, util=88.02% 00:09:17.065 nvme0n3: ios=1553/1858, merge=0/0, ticks=437/378, in_queue=815, util=89.03% 00:09:17.065 nvme0n4: ios=1996/2048, merge=0/0, ticks=458/353, in_queue=811, util=89.58% 00:09:17.065 13:48:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:17.065 [global] 00:09:17.065 thread=1 00:09:17.065 invalidate=1 00:09:17.065 rw=write 00:09:17.065 time_based=1 00:09:17.065 runtime=1 00:09:17.065 ioengine=libaio 00:09:17.065 direct=1 00:09:17.065 bs=4096 00:09:17.065 iodepth=128 00:09:17.065 norandommap=0 00:09:17.065 numjobs=1 00:09:17.065 00:09:17.065 verify_dump=1 00:09:17.065 verify_backlog=512 00:09:17.065 verify_state_save=0 00:09:17.065 do_verify=1 00:09:17.065 verify=crc32c-intel 00:09:17.065 [job0] 00:09:17.065 filename=/dev/nvme0n1 00:09:17.065 [job1] 00:09:17.065 filename=/dev/nvme0n2 00:09:17.065 [job2] 00:09:17.065 filename=/dev/nvme0n3 00:09:17.065 [job3] 00:09:17.065 filename=/dev/nvme0n4 00:09:17.065 Could not set queue depth (nvme0n1) 00:09:17.065 Could not set queue depth (nvme0n2) 00:09:17.065 Could not set queue depth (nvme0n3) 00:09:17.065 Could not set queue depth (nvme0n4) 00:09:17.065 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:17.065 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:17.065 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:17.065 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:17.065 fio-3.35 00:09:17.065 Starting 4 threads 00:09:18.561 
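The fio-wrapper invocation at fio.sh@52 starts the third pass: a queue-depth-128 sequential write with crc32c verification against the four namespaces of the connected controller. The [global] and [jobN] options it prints are collected below into a standalone job file, assuming the same /dev/nvme0n1..n4 device names; this is a condensed sketch for readability, not the wrapper's actual generated file:

    # Sketch: reproduce the "-i 4096 -d 128 -t write" verify run shown above with plain fio.
    cat > write-verify.fio <<'EOF'
    [global]
    thread=1
    invalidate=1
    rw=write
    time_based=1
    runtime=1
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=128
    numjobs=1
    do_verify=1
    verify=crc32c-intel
    verify_backlog=512
    verify_dump=1
    verify_state_save=0

    [job0]
    filename=/dev/nvme0n1
    [job1]
    filename=/dev/nvme0n2
    [job2]
    filename=/dev/nvme0n3
    [job3]
    filename=/dev/nvme0n4
    EOF
    fio write-verify.fio
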
00:09:18.561 job0: (groupid=0, jobs=1): err= 0: pid=66473: Fri Dec 6 13:48:17 2024 00:09:18.561 read: IOPS=3921, BW=15.3MiB/s (16.1MB/s)(15.4MiB/1004msec) 00:09:18.561 slat (usec): min=6, max=5238, avg=121.45, stdev=593.13 00:09:18.561 clat (usec): min=308, max=20091, avg=15851.84, stdev=2083.83 00:09:18.561 lat (usec): min=4334, max=20108, avg=15973.29, stdev=2007.32 00:09:18.561 clat percentiles (usec): 00:09:18.561 | 1.00th=[ 8356], 5.00th=[13173], 10.00th=[13960], 20.00th=[14746], 00:09:18.561 | 30.00th=[15008], 40.00th=[15401], 50.00th=[15795], 60.00th=[16188], 00:09:18.561 | 70.00th=[16712], 80.00th=[17433], 90.00th=[18482], 95.00th=[19268], 00:09:18.561 | 99.00th=[19792], 99.50th=[20055], 99.90th=[20055], 99.95th=[20055], 00:09:18.561 | 99.99th=[20055] 00:09:18.561 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:09:18.561 slat (usec): min=13, max=4358, avg=119.59, stdev=534.19 00:09:18.561 clat (usec): min=11078, max=18534, avg=15690.05, stdev=1173.62 00:09:18.561 lat (usec): min=11541, max=18559, avg=15809.64, stdev=1053.48 00:09:18.561 clat percentiles (usec): 00:09:18.561 | 1.00th=[12125], 5.00th=[14091], 10.00th=[14353], 20.00th=[14746], 00:09:18.561 | 30.00th=[15008], 40.00th=[15270], 50.00th=[15664], 60.00th=[15926], 00:09:18.561 | 70.00th=[16319], 80.00th=[16712], 90.00th=[17171], 95.00th=[17695], 00:09:18.561 | 99.00th=[18482], 99.50th=[18482], 99.90th=[18482], 99.95th=[18482], 00:09:18.561 | 99.99th=[18482] 00:09:18.561 bw ( KiB/s): min=16384, max=16384, per=35.10%, avg=16384.00, stdev= 0.00, samples=2 00:09:18.561 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:09:18.561 lat (usec) : 500=0.01% 00:09:18.561 lat (msec) : 10=0.80%, 20=98.80%, 50=0.39% 00:09:18.561 cpu : usr=4.59%, sys=11.57%, ctx=255, majf=0, minf=13 00:09:18.561 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:18.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.561 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:18.561 issued rwts: total=3937,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:18.561 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:18.561 job1: (groupid=0, jobs=1): err= 0: pid=66474: Fri Dec 6 13:48:17 2024 00:09:18.561 read: IOPS=3961, BW=15.5MiB/s (16.2MB/s)(15.5MiB/1002msec) 00:09:18.561 slat (usec): min=7, max=5181, avg=120.49, stdev=585.71 00:09:18.561 clat (usec): min=414, max=20086, avg=15692.86, stdev=2080.85 00:09:18.561 lat (usec): min=5075, max=20099, avg=15813.35, stdev=2006.53 00:09:18.561 clat percentiles (usec): 00:09:18.561 | 1.00th=[ 8979], 5.00th=[13304], 10.00th=[13566], 20.00th=[14222], 00:09:18.561 | 30.00th=[14877], 40.00th=[15139], 50.00th=[15533], 60.00th=[16057], 00:09:18.561 | 70.00th=[16581], 80.00th=[17171], 90.00th=[18482], 95.00th=[19268], 00:09:18.561 | 99.00th=[19792], 99.50th=[20055], 99.90th=[20055], 99.95th=[20055], 00:09:18.561 | 99.99th=[20055] 00:09:18.561 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:09:18.561 slat (usec): min=10, max=4318, avg=119.18, stdev=530.33 00:09:18.561 clat (usec): min=11215, max=18545, avg=15664.73, stdev=1123.27 00:09:18.561 lat (usec): min=11695, max=18570, avg=15783.91, stdev=996.86 00:09:18.561 clat percentiles (usec): 00:09:18.561 | 1.00th=[12256], 5.00th=[14091], 10.00th=[14353], 20.00th=[14746], 00:09:18.561 | 30.00th=[15008], 40.00th=[15270], 50.00th=[15664], 60.00th=[16057], 00:09:18.561 | 70.00th=[16319], 80.00th=[16581], 
90.00th=[16909], 95.00th=[17433], 00:09:18.561 | 99.00th=[18482], 99.50th=[18482], 99.90th=[18482], 99.95th=[18482], 00:09:18.561 | 99.99th=[18482] 00:09:18.561 bw ( KiB/s): min=16384, max=16416, per=35.14%, avg=16400.00, stdev=22.63, samples=2 00:09:18.561 iops : min= 4096, max= 4104, avg=4100.00, stdev= 5.66, samples=2 00:09:18.561 lat (usec) : 500=0.01% 00:09:18.561 lat (msec) : 10=0.79%, 20=98.81%, 50=0.38% 00:09:18.561 cpu : usr=4.60%, sys=11.89%, ctx=253, majf=0, minf=15 00:09:18.561 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:18.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.561 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:18.561 issued rwts: total=3969,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:18.561 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:18.561 job2: (groupid=0, jobs=1): err= 0: pid=66475: Fri Dec 6 13:48:17 2024 00:09:18.561 read: IOPS=1720, BW=6882KiB/s (7047kB/s)(6916KiB/1005msec) 00:09:18.561 slat (usec): min=6, max=11623, avg=273.41, stdev=1452.30 00:09:18.561 clat (usec): min=1818, max=47315, avg=33682.81, stdev=7159.37 00:09:18.561 lat (usec): min=10081, max=47333, avg=33956.21, stdev=7053.60 00:09:18.561 clat percentiles (usec): 00:09:18.561 | 1.00th=[10421], 5.00th=[24249], 10.00th=[28181], 20.00th=[29492], 00:09:18.562 | 30.00th=[30016], 40.00th=[30540], 50.00th=[31589], 60.00th=[33424], 00:09:18.562 | 70.00th=[36963], 80.00th=[39584], 90.00th=[45351], 95.00th=[46400], 00:09:18.562 | 99.00th=[47449], 99.50th=[47449], 99.90th=[47449], 99.95th=[47449], 00:09:18.562 | 99.99th=[47449] 00:09:18.562 write: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec); 0 zone resets 00:09:18.562 slat (usec): min=13, max=12092, avg=250.64, stdev=1317.23 00:09:18.562 clat (usec): min=18942, max=49840, avg=32686.12, stdev=7793.07 00:09:18.562 lat (usec): min=23949, max=49873, avg=32936.76, stdev=7737.02 00:09:18.562 clat percentiles (usec): 00:09:18.562 | 1.00th=[21365], 5.00th=[24773], 10.00th=[25035], 20.00th=[25822], 00:09:18.562 | 30.00th=[27132], 40.00th=[27919], 50.00th=[31327], 60.00th=[32113], 00:09:18.562 | 70.00th=[35390], 80.00th=[40109], 90.00th=[46924], 95.00th=[47973], 00:09:18.562 | 99.00th=[49546], 99.50th=[49546], 99.90th=[50070], 99.95th=[50070], 00:09:18.562 | 99.99th=[50070] 00:09:18.562 bw ( KiB/s): min= 8192, max= 8208, per=17.57%, avg=8200.00, stdev=11.31, samples=2 00:09:18.562 iops : min= 2048, max= 2052, avg=2050.00, stdev= 2.83, samples=2 00:09:18.562 lat (msec) : 2=0.03%, 20=1.88%, 50=98.09% 00:09:18.562 cpu : usr=2.89%, sys=5.28%, ctx=130, majf=0, minf=11 00:09:18.562 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:09:18.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.562 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:18.562 issued rwts: total=1729,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:18.562 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:18.562 job3: (groupid=0, jobs=1): err= 0: pid=66476: Fri Dec 6 13:48:17 2024 00:09:18.562 read: IOPS=1017, BW=4072KiB/s (4169kB/s)(4096KiB/1006msec) 00:09:18.562 slat (usec): min=6, max=12116, avg=323.85, stdev=1439.85 00:09:18.562 clat (usec): min=24214, max=61352, avg=40291.92, stdev=7442.01 00:09:18.562 lat (usec): min=29476, max=61369, avg=40615.77, stdev=7572.48 00:09:18.562 clat percentiles (usec): 00:09:18.562 | 1.00th=[29492], 5.00th=[31327], 10.00th=[33162], 
20.00th=[33424], 00:09:18.562 | 30.00th=[34866], 40.00th=[35390], 50.00th=[38011], 60.00th=[42206], 00:09:18.562 | 70.00th=[45351], 80.00th=[47973], 90.00th=[48497], 95.00th=[53740], 00:09:18.562 | 99.00th=[59507], 99.50th=[60031], 99.90th=[61080], 99.95th=[61604], 00:09:18.562 | 99.99th=[61604] 00:09:18.562 write: IOPS=1489, BW=5956KiB/s (6099kB/s)(5992KiB/1006msec); 0 zone resets 00:09:18.562 slat (usec): min=16, max=11551, avg=438.18, stdev=1579.20 00:09:18.562 clat (msec): min=4, max=109, avg=56.53, stdev=22.08 00:09:18.562 lat (msec): min=10, max=109, avg=56.97, stdev=22.19 00:09:18.562 clat percentiles (msec): 00:09:18.562 | 1.00th=[ 13], 5.00th=[ 35], 10.00th=[ 35], 20.00th=[ 36], 00:09:18.562 | 30.00th=[ 38], 40.00th=[ 47], 50.00th=[ 54], 60.00th=[ 57], 00:09:18.562 | 70.00th=[ 62], 80.00th=[ 81], 90.00th=[ 92], 95.00th=[ 97], 00:09:18.562 | 99.00th=[ 105], 99.50th=[ 109], 99.90th=[ 109], 99.95th=[ 109], 00:09:18.562 | 99.99th=[ 109] 00:09:18.562 bw ( KiB/s): min= 4928, max= 6044, per=11.75%, avg=5486.00, stdev=789.13, samples=2 00:09:18.562 iops : min= 1232, max= 1511, avg=1371.50, stdev=197.28, samples=2 00:09:18.562 lat (msec) : 10=0.04%, 20=0.83%, 50=62.41%, 100=34.38%, 250=2.34% 00:09:18.562 cpu : usr=2.09%, sys=4.28%, ctx=180, majf=0, minf=11 00:09:18.562 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:09:18.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.562 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:18.562 issued rwts: total=1024,1498,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:18.562 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:18.562 00:09:18.562 Run status group 0 (all jobs): 00:09:18.562 READ: bw=41.4MiB/s (43.4MB/s), 4072KiB/s-15.5MiB/s (4169kB/s-16.2MB/s), io=41.6MiB (43.7MB), run=1002-1006msec 00:09:18.562 WRITE: bw=45.6MiB/s (47.8MB/s), 5956KiB/s-16.0MiB/s (6099kB/s-16.7MB/s), io=45.9MiB (48.1MB), run=1002-1006msec 00:09:18.562 00:09:18.562 Disk stats (read/write): 00:09:18.562 nvme0n1: ios=3410/3584, merge=0/0, ticks=12256/12139, in_queue=24395, util=89.78% 00:09:18.562 nvme0n2: ios=3432/3584, merge=0/0, ticks=12164/12110, in_queue=24274, util=88.84% 00:09:18.562 nvme0n3: ios=1557/1600, merge=0/0, ticks=13100/12978, in_queue=26078, util=89.58% 00:09:18.562 nvme0n4: ios=1041/1263, merge=0/0, ticks=13798/21061, in_queue=34859, util=89.95% 00:09:18.562 13:48:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:18.562 [global] 00:09:18.562 thread=1 00:09:18.562 invalidate=1 00:09:18.562 rw=randwrite 00:09:18.562 time_based=1 00:09:18.562 runtime=1 00:09:18.562 ioengine=libaio 00:09:18.562 direct=1 00:09:18.562 bs=4096 00:09:18.562 iodepth=128 00:09:18.562 norandommap=0 00:09:18.562 numjobs=1 00:09:18.562 00:09:18.562 verify_dump=1 00:09:18.562 verify_backlog=512 00:09:18.562 verify_state_save=0 00:09:18.562 do_verify=1 00:09:18.562 verify=crc32c-intel 00:09:18.562 [job0] 00:09:18.562 filename=/dev/nvme0n1 00:09:18.562 [job1] 00:09:18.562 filename=/dev/nvme0n2 00:09:18.562 [job2] 00:09:18.562 filename=/dev/nvme0n3 00:09:18.562 [job3] 00:09:18.562 filename=/dev/nvme0n4 00:09:18.562 Could not set queue depth (nvme0n1) 00:09:18.562 Could not set queue depth (nvme0n2) 00:09:18.562 Could not set queue depth (nvme0n3) 00:09:18.562 Could not set queue depth (nvme0n4) 00:09:18.562 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:09:18.562 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:18.562 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:18.562 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:18.562 fio-3.35 00:09:18.562 Starting 4 threads 00:09:19.943 00:09:19.943 job0: (groupid=0, jobs=1): err= 0: pid=66529: Fri Dec 6 13:48:18 2024 00:09:19.943 read: IOPS=3980, BW=15.5MiB/s (16.3MB/s)(15.6MiB/1004msec) 00:09:19.943 slat (usec): min=9, max=4539, avg=119.59, stdev=577.19 00:09:19.943 clat (usec): min=2859, max=18780, avg=15706.87, stdev=1693.42 00:09:19.943 lat (usec): min=2877, max=18802, avg=15826.46, stdev=1601.77 00:09:19.943 clat percentiles (usec): 00:09:19.943 | 1.00th=[ 7111], 5.00th=[13435], 10.00th=[14877], 20.00th=[15270], 00:09:19.943 | 30.00th=[15401], 40.00th=[15664], 50.00th=[15795], 60.00th=[16057], 00:09:19.943 | 70.00th=[16319], 80.00th=[16450], 90.00th=[16909], 95.00th=[17957], 00:09:19.943 | 99.00th=[18744], 99.50th=[18744], 99.90th=[18744], 99.95th=[18744], 00:09:19.943 | 99.99th=[18744] 00:09:19.943 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:09:19.943 slat (usec): min=12, max=3988, avg=118.90, stdev=528.87 00:09:19.943 clat (usec): min=10953, max=18431, avg=15605.52, stdev=1090.59 00:09:19.943 lat (usec): min=11074, max=18452, avg=15724.42, stdev=960.11 00:09:19.943 clat percentiles (usec): 00:09:19.943 | 1.00th=[12256], 5.00th=[14353], 10.00th=[14484], 20.00th=[14877], 00:09:19.943 | 30.00th=[15008], 40.00th=[15270], 50.00th=[15533], 60.00th=[15795], 00:09:19.943 | 70.00th=[16057], 80.00th=[16319], 90.00th=[16909], 95.00th=[17695], 00:09:19.943 | 99.00th=[18220], 99.50th=[18482], 99.90th=[18482], 99.95th=[18482], 00:09:19.943 | 99.99th=[18482] 00:09:19.943 bw ( KiB/s): min=16384, max=16384, per=28.17%, avg=16384.00, stdev= 0.00, samples=2 00:09:19.943 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:09:19.943 lat (msec) : 4=0.35%, 10=0.40%, 20=99.26% 00:09:19.943 cpu : usr=3.99%, sys=12.26%, ctx=253, majf=0, minf=9 00:09:19.943 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:19.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.943 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:19.943 issued rwts: total=3996,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:19.943 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:19.943 job1: (groupid=0, jobs=1): err= 0: pid=66530: Fri Dec 6 13:48:18 2024 00:09:19.943 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:09:19.943 slat (usec): min=4, max=7421, avg=119.48, stdev=533.06 00:09:19.943 clat (usec): min=3068, max=23067, avg=15472.66, stdev=1917.84 00:09:19.943 lat (usec): min=3079, max=23100, avg=15592.15, stdev=1927.07 00:09:19.943 clat percentiles (usec): 00:09:19.943 | 1.00th=[ 6521], 5.00th=[12518], 10.00th=[13698], 20.00th=[14746], 00:09:19.943 | 30.00th=[15270], 40.00th=[15401], 50.00th=[15664], 60.00th=[15795], 00:09:19.943 | 70.00th=[16057], 80.00th=[16450], 90.00th=[16909], 95.00th=[18220], 00:09:19.943 | 99.00th=[20317], 99.50th=[21103], 99.90th=[21890], 99.95th=[22414], 00:09:19.943 | 99.99th=[22938] 00:09:19.943 write: IOPS=4090, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1002msec); 0 zone resets 00:09:19.943 slat (usec): min=10, 
max=8433, avg=115.86, stdev=670.17 00:09:19.943 clat (usec): min=838, max=26473, avg=15395.87, stdev=1998.61 00:09:19.943 lat (usec): min=2998, max=26553, avg=15511.72, stdev=2091.74 00:09:19.943 clat percentiles (usec): 00:09:19.943 | 1.00th=[10028], 5.00th=[12649], 10.00th=[13435], 20.00th=[14222], 00:09:19.943 | 30.00th=[14615], 40.00th=[14877], 50.00th=[15139], 60.00th=[15533], 00:09:19.943 | 70.00th=[16057], 80.00th=[16581], 90.00th=[18220], 95.00th=[19006], 00:09:19.943 | 99.00th=[20579], 99.50th=[21627], 99.90th=[23987], 99.95th=[25822], 00:09:19.943 | 99.99th=[26346] 00:09:19.943 bw ( KiB/s): min=16384, max=16416, per=28.20%, avg=16400.00, stdev=22.63, samples=2 00:09:19.943 iops : min= 4096, max= 4104, avg=4100.00, stdev= 5.66, samples=2 00:09:19.943 lat (usec) : 1000=0.01% 00:09:19.943 lat (msec) : 4=0.23%, 10=0.93%, 20=97.10%, 50=1.73% 00:09:19.943 cpu : usr=4.00%, sys=12.69%, ctx=314, majf=0, minf=11 00:09:19.943 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:19.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.943 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:19.943 issued rwts: total=4096,4099,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:19.943 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:19.943 job2: (groupid=0, jobs=1): err= 0: pid=66531: Fri Dec 6 13:48:18 2024 00:09:19.943 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:09:19.943 slat (usec): min=7, max=4526, avg=129.42, stdev=630.49 00:09:19.943 clat (usec): min=12089, max=18766, avg=17131.87, stdev=946.42 00:09:19.943 lat (usec): min=15340, max=18780, avg=17261.29, stdev=714.12 00:09:19.943 clat percentiles (usec): 00:09:19.943 | 1.00th=[13435], 5.00th=[15795], 10.00th=[16188], 20.00th=[16581], 00:09:19.943 | 30.00th=[16909], 40.00th=[16909], 50.00th=[17171], 60.00th=[17433], 00:09:19.943 | 70.00th=[17695], 80.00th=[17957], 90.00th=[18220], 95.00th=[18220], 00:09:19.943 | 99.00th=[18744], 99.50th=[18744], 99.90th=[18744], 99.95th=[18744], 00:09:19.943 | 99.99th=[18744] 00:09:19.943 write: IOPS=3829, BW=15.0MiB/s (15.7MB/s)(15.0MiB/1003msec); 0 zone resets 00:09:19.943 slat (usec): min=11, max=4358, avg=131.81, stdev=597.89 00:09:19.943 clat (usec): min=292, max=19418, avg=16944.25, stdev=1909.35 00:09:19.943 lat (usec): min=3724, max=19442, avg=17076.06, stdev=1816.45 00:09:19.943 clat percentiles (usec): 00:09:19.943 | 1.00th=[ 8094], 5.00th=[14353], 10.00th=[15533], 20.00th=[16188], 00:09:19.943 | 30.00th=[16450], 40.00th=[16909], 50.00th=[17171], 60.00th=[17433], 00:09:19.943 | 70.00th=[17957], 80.00th=[18220], 90.00th=[18744], 95.00th=[19006], 00:09:19.943 | 99.00th=[19268], 99.50th=[19268], 99.90th=[19530], 99.95th=[19530], 00:09:19.943 | 99.99th=[19530] 00:09:19.943 bw ( KiB/s): min=13568, max=16168, per=25.57%, avg=14868.00, stdev=1838.48, samples=2 00:09:19.943 iops : min= 3392, max= 4042, avg=3717.00, stdev=459.62, samples=2 00:09:19.943 lat (usec) : 500=0.01% 00:09:19.943 lat (msec) : 4=0.15%, 10=0.71%, 20=99.12% 00:09:19.943 cpu : usr=3.99%, sys=11.28%, ctx=234, majf=0, minf=16 00:09:19.943 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:19.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.943 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:19.943 issued rwts: total=3584,3841,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:19.943 latency : target=0, window=0, percentile=100.00%, depth=128 
00:09:19.943 job3: (groupid=0, jobs=1): err= 0: pid=66532: Fri Dec 6 13:48:18 2024 00:09:19.943 read: IOPS=2230, BW=8920KiB/s (9134kB/s)(8956KiB/1004msec) 00:09:19.943 slat (usec): min=12, max=8381, avg=213.95, stdev=885.44 00:09:19.943 clat (usec): min=935, max=35347, avg=25528.79, stdev=3888.81 00:09:19.943 lat (usec): min=5223, max=35385, avg=25742.73, stdev=3944.60 00:09:19.943 clat percentiles (usec): 00:09:19.943 | 1.00th=[ 5669], 5.00th=[20579], 10.00th=[22152], 20.00th=[23987], 00:09:19.943 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25822], 60.00th=[26084], 00:09:19.943 | 70.00th=[26870], 80.00th=[27395], 90.00th=[29492], 95.00th=[31327], 00:09:19.943 | 99.00th=[33162], 99.50th=[33817], 99.90th=[34341], 99.95th=[34341], 00:09:19.943 | 99.99th=[35390] 00:09:19.943 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:09:19.943 slat (usec): min=10, max=8705, avg=195.05, stdev=789.94 00:09:19.943 clat (usec): min=18508, max=35763, avg=26984.11, stdev=2291.99 00:09:19.943 lat (usec): min=18553, max=35805, avg=27179.16, stdev=2367.59 00:09:19.943 clat percentiles (usec): 00:09:19.943 | 1.00th=[21365], 5.00th=[24249], 10.00th=[25035], 20.00th=[25560], 00:09:19.943 | 30.00th=[25822], 40.00th=[26346], 50.00th=[26870], 60.00th=[27132], 00:09:19.943 | 70.00th=[27395], 80.00th=[27657], 90.00th=[30016], 95.00th=[32113], 00:09:19.943 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:09:19.943 | 99.99th=[35914] 00:09:19.943 bw ( KiB/s): min=10176, max=10324, per=17.63%, avg=10250.00, stdev=104.65, samples=2 00:09:19.943 iops : min= 2544, max= 2581, avg=2562.50, stdev=26.16, samples=2 00:09:19.943 lat (usec) : 1000=0.02% 00:09:19.943 lat (msec) : 10=0.85%, 20=1.23%, 50=97.90% 00:09:19.943 cpu : usr=2.89%, sys=8.57%, ctx=324, majf=0, minf=11 00:09:19.943 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:09:19.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.944 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:19.944 issued rwts: total=2239,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:19.944 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:19.944 00:09:19.944 Run status group 0 (all jobs): 00:09:19.944 READ: bw=54.1MiB/s (56.8MB/s), 8920KiB/s-16.0MiB/s (9134kB/s-16.7MB/s), io=54.4MiB (57.0MB), run=1002-1004msec 00:09:19.944 WRITE: bw=56.8MiB/s (59.5MB/s), 9.96MiB/s-16.0MiB/s (10.4MB/s-16.8MB/s), io=57.0MiB (59.8MB), run=1002-1004msec 00:09:19.944 00:09:19.944 Disk stats (read/write): 00:09:19.944 nvme0n1: ios=3410/3584, merge=0/0, ticks=12020/12216, in_queue=24236, util=89.28% 00:09:19.944 nvme0n2: ios=3454/3584, merge=0/0, ticks=26067/23976, in_queue=50043, util=89.69% 00:09:19.944 nvme0n3: ios=3093/3296, merge=0/0, ticks=12206/12601, in_queue=24807, util=89.69% 00:09:19.944 nvme0n4: ios=2048/2087, merge=0/0, ticks=17629/16854, in_queue=34483, util=89.64% 00:09:19.944 13:48:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:19.944 13:48:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66545 00:09:19.944 13:48:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:19.944 13:48:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:19.944 [global] 00:09:19.944 thread=1 00:09:19.944 invalidate=1 00:09:19.944 rw=read 00:09:19.944 time_based=1 00:09:19.944 
runtime=10 00:09:19.944 ioengine=libaio 00:09:19.944 direct=1 00:09:19.944 bs=4096 00:09:19.944 iodepth=1 00:09:19.944 norandommap=1 00:09:19.944 numjobs=1 00:09:19.944 00:09:19.944 [job0] 00:09:19.944 filename=/dev/nvme0n1 00:09:19.944 [job1] 00:09:19.944 filename=/dev/nvme0n2 00:09:19.944 [job2] 00:09:19.944 filename=/dev/nvme0n3 00:09:19.944 [job3] 00:09:19.944 filename=/dev/nvme0n4 00:09:19.944 Could not set queue depth (nvme0n1) 00:09:19.944 Could not set queue depth (nvme0n2) 00:09:19.944 Could not set queue depth (nvme0n3) 00:09:19.944 Could not set queue depth (nvme0n4) 00:09:19.944 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:19.944 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:19.944 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:19.944 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:19.944 fio-3.35 00:09:19.944 Starting 4 threads 00:09:23.232 13:48:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:23.232 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=32489472, buflen=4096 00:09:23.232 fio: pid=66590, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:23.232 13:48:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:23.232 fio: pid=66588, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:23.232 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=37851136, buflen=4096 00:09:23.232 13:48:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:23.232 13:48:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:23.491 fio: pid=66585, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:23.491 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=40361984, buflen=4096 00:09:23.491 13:48:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:23.491 13:48:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:23.750 fio: pid=66586, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:23.750 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=9801728, buflen=4096 00:09:23.750 00:09:23.750 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66585: Fri Dec 6 13:48:23 2024 00:09:23.750 read: IOPS=2879, BW=11.2MiB/s (11.8MB/s)(38.5MiB/3423msec) 00:09:23.750 slat (usec): min=7, max=13867, avg=22.24, stdev=224.09 00:09:23.750 clat (usec): min=123, max=3904, avg=323.45, stdev=101.98 00:09:23.750 lat (usec): min=137, max=14103, avg=345.69, stdev=245.16 00:09:23.750 clat percentiles (usec): 00:09:23.750 | 1.00th=[ 145], 5.00th=[ 167], 10.00th=[ 204], 20.00th=[ 273], 00:09:23.750 | 30.00th=[ 306], 40.00th=[ 322], 50.00th=[ 334], 60.00th=[ 347], 00:09:23.750 
| 70.00th=[ 359], 80.00th=[ 371], 90.00th=[ 392], 95.00th=[ 412], 00:09:23.750 | 99.00th=[ 469], 99.50th=[ 545], 99.90th=[ 1074], 99.95th=[ 2311], 00:09:23.750 | 99.99th=[ 3916] 00:09:23.750 bw ( KiB/s): min=10488, max=11696, per=21.97%, avg=10892.00, stdev=462.67, samples=6 00:09:23.750 iops : min= 2622, max= 2924, avg=2723.00, stdev=115.67, samples=6 00:09:23.750 lat (usec) : 250=14.90%, 500=84.40%, 750=0.45%, 1000=0.11% 00:09:23.750 lat (msec) : 2=0.08%, 4=0.05% 00:09:23.750 cpu : usr=0.94%, sys=4.97%, ctx=9860, majf=0, minf=1 00:09:23.750 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:23.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.750 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.750 issued rwts: total=9855,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:23.750 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:23.750 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66586: Fri Dec 6 13:48:23 2024 00:09:23.750 read: IOPS=5082, BW=19.8MiB/s (20.8MB/s)(73.3MiB/3695msec) 00:09:23.750 slat (usec): min=10, max=12812, avg=16.11, stdev=154.66 00:09:23.750 clat (usec): min=118, max=40427, avg=179.50, stdev=296.17 00:09:23.750 lat (usec): min=130, max=40441, avg=195.60, stdev=334.42 00:09:23.750 clat percentiles (usec): 00:09:23.750 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 147], 20.00th=[ 155], 00:09:23.750 | 30.00th=[ 161], 40.00th=[ 167], 50.00th=[ 174], 60.00th=[ 180], 00:09:23.750 | 70.00th=[ 188], 80.00th=[ 198], 90.00th=[ 212], 95.00th=[ 227], 00:09:23.750 | 99.00th=[ 258], 99.50th=[ 273], 99.90th=[ 392], 99.95th=[ 619], 00:09:23.750 | 99.99th=[ 2212] 00:09:23.750 bw ( KiB/s): min=18816, max=21776, per=41.05%, avg=20353.43, stdev=1246.62, samples=7 00:09:23.750 iops : min= 4704, max= 5444, avg=5088.29, stdev=311.74, samples=7 00:09:23.750 lat (usec) : 250=98.58%, 500=1.35%, 750=0.03%, 1000=0.01% 00:09:23.750 lat (msec) : 2=0.02%, 4=0.01%, 50=0.01% 00:09:23.750 cpu : usr=1.60%, sys=5.85%, ctx=18784, majf=0, minf=1 00:09:23.750 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:23.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.750 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.750 issued rwts: total=18778,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:23.750 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:23.750 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66588: Fri Dec 6 13:48:23 2024 00:09:23.750 read: IOPS=2895, BW=11.3MiB/s (11.9MB/s)(36.1MiB/3192msec) 00:09:23.750 slat (usec): min=8, max=8904, avg=18.98, stdev=123.14 00:09:23.750 clat (usec): min=85, max=7232, avg=324.61, stdev=118.32 00:09:23.750 lat (usec): min=159, max=9135, avg=343.59, stdev=169.81 00:09:23.750 clat percentiles (usec): 00:09:23.750 | 1.00th=[ 217], 5.00th=[ 241], 10.00th=[ 253], 20.00th=[ 269], 00:09:23.750 | 30.00th=[ 285], 40.00th=[ 306], 50.00th=[ 322], 60.00th=[ 338], 00:09:23.750 | 70.00th=[ 351], 80.00th=[ 367], 90.00th=[ 388], 95.00th=[ 408], 00:09:23.750 | 99.00th=[ 457], 99.50th=[ 502], 99.90th=[ 1188], 99.95th=[ 2933], 00:09:23.750 | 99.99th=[ 7242] 00:09:23.750 bw ( KiB/s): min=10496, max=13312, per=23.16%, avg=11485.33, stdev=1309.86, samples=6 00:09:23.750 iops : min= 2624, max= 3328, avg=2871.33, stdev=327.46, samples=6 00:09:23.750 lat (usec) : 100=0.01%, 
250=8.47%, 500=91.00%, 750=0.27%, 1000=0.10% 00:09:23.750 lat (msec) : 2=0.05%, 4=0.08%, 10=0.01% 00:09:23.750 cpu : usr=0.85%, sys=4.73%, ctx=9247, majf=0, minf=1 00:09:23.750 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:23.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.750 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.750 issued rwts: total=9242,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:23.750 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:23.750 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66590: Fri Dec 6 13:48:23 2024 00:09:23.750 read: IOPS=2703, BW=10.6MiB/s (11.1MB/s)(31.0MiB/2934msec) 00:09:23.750 slat (nsec): min=16867, max=82320, avg=25312.62, stdev=6035.95 00:09:23.750 clat (usec): min=168, max=1502, avg=341.80, stdev=58.91 00:09:23.750 lat (usec): min=194, max=1532, avg=367.12, stdev=59.38 00:09:23.750 clat percentiles (usec): 00:09:23.750 | 1.00th=[ 215], 5.00th=[ 265], 10.00th=[ 285], 20.00th=[ 306], 00:09:23.750 | 30.00th=[ 318], 40.00th=[ 326], 50.00th=[ 338], 60.00th=[ 347], 00:09:23.750 | 70.00th=[ 359], 80.00th=[ 371], 90.00th=[ 400], 95.00th=[ 424], 00:09:23.750 | 99.00th=[ 570], 99.50th=[ 603], 99.90th=[ 685], 99.95th=[ 865], 00:09:23.750 | 99.99th=[ 1500] 00:09:23.750 bw ( KiB/s): min=10464, max=11616, per=21.99%, avg=10904.00, stdev=502.22, samples=5 00:09:23.750 iops : min= 2616, max= 2904, avg=2726.00, stdev=125.55, samples=5 00:09:23.750 lat (usec) : 250=2.79%, 500=94.81%, 750=2.33%, 1000=0.05% 00:09:23.750 lat (msec) : 2=0.01% 00:09:23.750 cpu : usr=1.13%, sys=6.24%, ctx=7934, majf=0, minf=2 00:09:23.750 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:23.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.750 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.750 issued rwts: total=7933,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:23.750 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:23.750 00:09:23.750 Run status group 0 (all jobs): 00:09:23.750 READ: bw=48.4MiB/s (50.8MB/s), 10.6MiB/s-19.8MiB/s (11.1MB/s-20.8MB/s), io=179MiB (188MB), run=2934-3695msec 00:09:23.750 00:09:23.750 Disk stats (read/write): 00:09:23.750 nvme0n1: ios=9560/0, merge=0/0, ticks=3080/0, in_queue=3080, util=95.05% 00:09:23.750 nvme0n2: ios=18350/0, merge=0/0, ticks=3375/0, in_queue=3375, util=95.61% 00:09:23.750 nvme0n3: ios=8985/0, merge=0/0, ticks=2849/0, in_queue=2849, util=96.02% 00:09:23.750 nvme0n4: ios=7759/0, merge=0/0, ticks=2680/0, in_queue=2680, util=96.69% 00:09:23.750 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:23.750 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:24.317 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:24.317 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:24.575 13:48:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:24.575 13:48:23 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:24.833 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:24.833 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:25.091 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:25.091 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:25.350 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:25.350 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66545 00:09:25.350 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:25.350 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:25.350 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.350 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:25.350 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:25.350 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:25.350 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:25.350 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:25.350 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:25.350 nvmf hotplug test: fio failed as expected 00:09:25.350 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:25.350 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:25.350 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:25.350 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:25.609 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:25.609 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:25.609 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:25.609 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:25.609 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:25.609 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:25.609 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:25.609 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
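What the fio stage above boils down to: four jobs issue direct 4 KiB libaio reads against /dev/nvme0n1-n4 for ten seconds while the backing raid/concat and malloc bdevs are deleted over RPC, so every job eventually fails with err=95 (Operation not supported), and that non-zero fio exit is the pass condition ("nvmf hotplug test: fio failed as expected"). A condensed sketch of the same sequence, with a single job and illustrative background/wait handling rather than the exact fio.sh flow:

# Hotplug sketch: one time-based read job against an exported namespace (illustrative device path).
fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
    --rw=read --bs=4096 --iodepth=1 --norandommap --time_based --runtime=10 &
fio_pid=$!
sleep 3
# Pull the backing bdevs while reads are in flight; they now complete with err=95.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
# A failing fio run is the expected outcome of the hotplug test.
wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'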
00:09:25.609 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:25.609 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:25.609 13:48:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:25.609 rmmod nvme_tcp 00:09:25.609 rmmod nvme_fabrics 00:09:25.866 rmmod nvme_keyring 00:09:25.866 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:25.866 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:25.866 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:25.866 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 66158 ']' 00:09:25.866 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 66158 00:09:25.866 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 66158 ']' 00:09:25.866 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 66158 00:09:25.866 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:25.866 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:25.866 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66158 00:09:25.866 killing process with pid 66158 00:09:25.866 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:25.866 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:25.866 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66158' 00:09:25.866 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 66158 00:09:25.866 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 66158 00:09:26.124 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:26.124 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:26.124 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:26.124 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:26.124 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:26.124 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:26.124 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:26.124 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:26.124 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:26.124 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:26.124 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:26.124 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip 
link set nvmf_tgt_br nomaster 00:09:26.124 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:26.124 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:26.124 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:26.124 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:26.124 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:26.124 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:26.124 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:26.124 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:26.124 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:26.382 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:26.382 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:26.382 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.382 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:26.382 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.382 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:09:26.382 00:09:26.382 real 0m20.276s 00:09:26.382 user 1m16.560s 00:09:26.382 sys 0m9.552s 00:09:26.382 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.382 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:26.382 ************************************ 00:09:26.382 END TEST nvmf_fio_target 00:09:26.382 ************************************ 00:09:26.382 13:48:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:26.382 13:48:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:26.382 13:48:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.382 13:48:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:26.382 ************************************ 00:09:26.382 START TEST nvmf_bdevio 00:09:26.382 ************************************ 00:09:26.382 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:26.382 * Looking for test storage... 
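The teardown above mirrors the setup: the subsystem is deleted over RPC, nvme-tcp and nvme-fabrics are unloaded, the SPDK-tagged iptables rules are filtered out of the saved ruleset, and the veth/bridge topology comes down (remove_spdk_ns runs with xtrace silenced, which is why the namespace removal itself does not appear in the trace). User time (~1m17s) exceeds wall time (~20s) because the fio jobs and the target's reactors run on several cores in parallel. Condensed, with the final netns removal assumed rather than shown:

# Teardown sketch; names from the trace, netns delete assumed to happen inside remove_spdk_ns.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                                        # killprocess 66158 in this run
iptables-save | grep -v SPDK_NVMF | iptables-restore   # the iptr helper, reconstructed
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns delete nvmf_tgt_ns_spdk                       # assumed: done by the silenced remove_spdk_ns

run_test then moves straight on to the next suite, nvmf_bdevio, over the same TCP transport.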
00:09:26.382 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:26.382 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:26.382 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:09:26.382 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:26.641 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:26.641 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:26.641 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:26.641 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:26.641 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:26.641 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:26.641 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:26.641 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:26.641 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:26.641 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:26.641 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:26.641 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:26.641 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:26.641 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:26.641 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:26.641 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:26.641 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:26.641 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:26.641 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:26.641 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:26.641 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:26.641 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:26.641 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:26.641 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:26.641 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:26.641 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:26.641 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:26.641 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:26.641 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:26.641 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:26.641 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:26.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.641 --rc genhtml_branch_coverage=1 00:09:26.641 --rc genhtml_function_coverage=1 00:09:26.641 --rc genhtml_legend=1 00:09:26.641 --rc geninfo_all_blocks=1 00:09:26.641 --rc geninfo_unexecuted_blocks=1 00:09:26.641 00:09:26.641 ' 00:09:26.641 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:26.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.641 --rc genhtml_branch_coverage=1 00:09:26.641 --rc genhtml_function_coverage=1 00:09:26.641 --rc genhtml_legend=1 00:09:26.641 --rc geninfo_all_blocks=1 00:09:26.641 --rc geninfo_unexecuted_blocks=1 00:09:26.641 00:09:26.641 ' 00:09:26.641 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:26.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.642 --rc genhtml_branch_coverage=1 00:09:26.642 --rc genhtml_function_coverage=1 00:09:26.642 --rc genhtml_legend=1 00:09:26.642 --rc geninfo_all_blocks=1 00:09:26.642 --rc geninfo_unexecuted_blocks=1 00:09:26.642 00:09:26.642 ' 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:26.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.642 --rc genhtml_branch_coverage=1 00:09:26.642 --rc genhtml_function_coverage=1 00:09:26.642 --rc genhtml_legend=1 00:09:26.642 --rc geninfo_all_blocks=1 00:09:26.642 --rc geninfo_unexecuted_blocks=1 00:09:26.642 00:09:26.642 ' 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=cfa2def7-c8af-457f-82a0-b312efdea7f4 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:26.642 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
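Up to this point bdevio.sh has only been sourcing its environment: scripts/common.sh supplies the lcov version comparison, test/nvmf/common.sh picks ports 4420-4422, generates a per-run host identity with nvme gen-hostnqn, and trips a harmless "line 33: [: : integer expression expected" while probing an unset variable; the script itself then fixes the malloc bdev geometry before calling nvmftestinit. In outline -- the UUID extraction below is an assumed derivation of NVME_HOSTID, not a line from common.sh:

# Per-run host identity and the bdev geometry this suite exports (sketch).
NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:cfa2def7-...
NVME_HOSTID=${NVME_HOSTNQN##*:}    # assumed: the trailing UUID of the generated NQN
MALLOC_BDEV_SIZE=64                # MiB
MALLOC_BLOCK_SIZE=512              # bytes per block -> 131072 blocks, as bdevio reports below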
00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:26.642 Cannot find device "nvmf_init_br" 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:26.642 Cannot find device "nvmf_init_br2" 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:26.642 Cannot find device "nvmf_tgt_br" 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:26.642 Cannot find device "nvmf_tgt_br2" 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:26.642 Cannot find device "nvmf_init_br" 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:26.642 Cannot find device "nvmf_init_br2" 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:26.642 Cannot find device "nvmf_tgt_br" 00:09:26.642 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:09:26.643 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:26.643 Cannot find device "nvmf_tgt_br2" 00:09:26.643 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:09:26.643 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:26.643 Cannot find device "nvmf_br" 00:09:26.643 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:09:26.643 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:26.643 Cannot find device "nvmf_init_if" 00:09:26.643 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:09:26.643 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:26.643 Cannot find device "nvmf_init_if2" 00:09:26.643 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:09:26.643 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:26.643 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:26.643 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:09:26.643 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:26.643 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:26.643 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:09:26.643 13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:26.643 
13:48:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:26.643 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:26.643 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:26.643 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:26.643 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:26.643 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:26.902 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:26.902 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:26.902 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:26.902 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:26.902 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:26.902 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:26.902 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:26.902 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:26.902 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:26.902 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:26.902 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:26.902 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:26.902 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:26.902 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:26.902 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:26.902 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:26.902 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:26.902 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:26.902 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:26.902 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:26.902 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:26.902 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:26.902 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:26.902 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:26.902 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:26.902 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:26.902 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:26.902 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:09:26.902 00:09:26.902 --- 10.0.0.3 ping statistics --- 00:09:26.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.903 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:09:26.903 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:26.903 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:26.903 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 00:09:26.903 00:09:26.903 --- 10.0.0.4 ping statistics --- 00:09:26.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.903 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:09:26.903 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:26.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:26.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:09:26.903 00:09:26.903 --- 10.0.0.1 ping statistics --- 00:09:26.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.903 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:09:26.903 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:26.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:26.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:09:26.903 00:09:26.903 --- 10.0.0.2 ping statistics --- 00:09:26.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.903 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:09:26.903 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:26.903 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:09:26.903 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:26.903 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:26.903 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:26.903 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:26.903 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:26.903 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:26.903 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:26.903 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:26.903 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:26.903 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:26.903 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:26.903 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=66925 00:09:26.903 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:26.903 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 66925 00:09:26.903 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 66925 ']' 00:09:26.903 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.903 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:26.903 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.903 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:26.903 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:27.162 [2024-12-06 13:48:26.346308] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
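nvmf_veth_init has now stitched together a small two-host network on the build VM: the initiator keeps 10.0.0.1/.2 on nvmf_init_if/if2 in the default namespace, the target side gets 10.0.0.3/.4 on nvmf_tgt_if/if2 inside nvmf_tgt_ns_spdk, both sides hang off the nvmf_br bridge, and the port-4420 iptables rules carry an SPDK_NVMF comment so teardown can strip them later. The four pings confirm reachability in both directions before nvmf_tgt is launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x78). Condensed to a single interface pair (the second pair is created the same way):

# Veth/bridge topology sketch; addresses and names as in the trace above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge                                   # interfaces brought up, omitted here
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.3                                                # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                 # target -> initiator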
00:09:27.162 [2024-12-06 13:48:26.346412] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.162 [2024-12-06 13:48:26.495254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:27.162 [2024-12-06 13:48:26.549519] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:27.162 [2024-12-06 13:48:26.549571] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:27.162 [2024-12-06 13:48:26.549597] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:27.162 [2024-12-06 13:48:26.549605] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:27.162 [2024-12-06 13:48:26.549611] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:27.162 [2024-12-06 13:48:26.551208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:27.162 [2024-12-06 13:48:26.551282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:27.162 [2024-12-06 13:48:26.551385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:27.162 [2024-12-06 13:48:26.551393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:27.421 [2024-12-06 13:48:26.622502] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:27.421 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:27.421 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:27.421 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:27.421 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:27.421 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:27.421 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:27.421 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:27.421 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.421 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:27.421 [2024-12-06 13:48:26.750900] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:27.421 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.421 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:27.421 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.421 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:27.421 Malloc0 00:09:27.421 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.421 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:09:27.421 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.421 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:27.421 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.421 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:27.421 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.421 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:27.679 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.679 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:27.679 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.679 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:27.679 [2024-12-06 13:48:26.833573] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:27.679 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.679 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:27.679 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:27.679 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:27.679 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:27.679 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:27.679 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:27.679 { 00:09:27.679 "params": { 00:09:27.679 "name": "Nvme$subsystem", 00:09:27.679 "trtype": "$TEST_TRANSPORT", 00:09:27.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:27.679 "adrfam": "ipv4", 00:09:27.679 "trsvcid": "$NVMF_PORT", 00:09:27.680 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:27.680 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:27.680 "hdgst": ${hdgst:-false}, 00:09:27.680 "ddgst": ${ddgst:-false} 00:09:27.680 }, 00:09:27.680 "method": "bdev_nvme_attach_controller" 00:09:27.680 } 00:09:27.680 EOF 00:09:27.680 )") 00:09:27.680 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:27.680 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
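With the target listening, one transport and four RPCs turn the 64 MiB malloc bdev into an NVMe/TCP namespace, and gen_nvmf_target_json hands bdevio a bdev_nvme_attach_controller config over /dev/fd/62; the rendered JSON is printed just below. Replayed against a running nvmf_tgt, the sequence from the trace is:

# Target-side RPCs in the order they appear above (rpc.py defaults to /var/tmp/spdk.sock).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
# bdevio is then launched with --json /dev/fd/62, fed by gen_nvmf_target_json,
# and attaches as host nqn.2016-06.io.spdk:host1 (see the rendered config below).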
00:09:27.680 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:27.680 13:48:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:27.680 "params": { 00:09:27.680 "name": "Nvme1", 00:09:27.680 "trtype": "tcp", 00:09:27.680 "traddr": "10.0.0.3", 00:09:27.680 "adrfam": "ipv4", 00:09:27.680 "trsvcid": "4420", 00:09:27.680 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:27.680 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:27.680 "hdgst": false, 00:09:27.680 "ddgst": false 00:09:27.680 }, 00:09:27.680 "method": "bdev_nvme_attach_controller" 00:09:27.680 }' 00:09:27.680 [2024-12-06 13:48:26.907133] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:09:27.680 [2024-12-06 13:48:26.907225] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66954 ] 00:09:27.680 [2024-12-06 13:48:27.061192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:27.938 [2024-12-06 13:48:27.127702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.938 [2024-12-06 13:48:27.127855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:27.938 [2024-12-06 13:48:27.128258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.938 [2024-12-06 13:48:27.215641] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:28.196 I/O targets: 00:09:28.196 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:28.196 00:09:28.196 00:09:28.196 CUnit - A unit testing framework for C - Version 2.1-3 00:09:28.196 http://cunit.sourceforge.net/ 00:09:28.196 00:09:28.196 00:09:28.196 Suite: bdevio tests on: Nvme1n1 00:09:28.196 Test: blockdev write read block ...passed 00:09:28.196 Test: blockdev write zeroes read block ...passed 00:09:28.196 Test: blockdev write zeroes read no split ...passed 00:09:28.196 Test: blockdev write zeroes read split ...passed 00:09:28.196 Test: blockdev write zeroes read split partial ...passed 00:09:28.196 Test: blockdev reset ...[2024-12-06 13:48:27.384139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:28.196 [2024-12-06 13:48:27.384246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1916b30 (9): Bad file descriptor 00:09:28.196 passed 00:09:28.196 Test: blockdev write read 8 blocks ...[2024-12-06 13:48:27.400203] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:09:28.196 passed 00:09:28.196 Test: blockdev write read size > 128k ...passed 00:09:28.196 Test: blockdev write read invalid size ...passed 00:09:28.196 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:28.196 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:28.196 Test: blockdev write read max offset ...passed 00:09:28.196 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:28.197 Test: blockdev writev readv 8 blocks ...passed 00:09:28.197 Test: blockdev writev readv 30 x 1block ...passed 00:09:28.197 Test: blockdev writev readv block ...passed 00:09:28.197 Test: blockdev writev readv size > 128k ...passed 00:09:28.197 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:28.197 Test: blockdev comparev and writev ...[2024-12-06 13:48:27.408679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:28.197 [2024-12-06 13:48:27.408729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:28.197 [2024-12-06 13:48:27.408747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:28.197 [2024-12-06 13:48:27.408756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:28.197 passed 00:09:28.197 Test: blockdev nvme passthru rw ...[2024-12-06 13:48:27.409208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:28.197 [2024-12-06 13:48:27.409230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:28.197 [2024-12-06 13:48:27.409245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:28.197 [2024-12-06 13:48:27.409254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:28.197 [2024-12-06 13:48:27.409586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:28.197 [2024-12-06 13:48:27.409601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:28.197 [2024-12-06 13:48:27.409615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:28.197 [2024-12-06 13:48:27.409624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:28.197 [2024-12-06 13:48:27.409892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:28.197 [2024-12-06 13:48:27.409907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:28.197 [2024-12-06 13:48:27.409921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:28.197 [2024-12-06 13:48:27.409930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED 
FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:28.197 passed 00:09:28.197 Test: blockdev nvme passthru vendor specific ...passed 00:09:28.197 Test: blockdev nvme admin passthru ...[2024-12-06 13:48:27.410794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:28.197 [2024-12-06 13:48:27.410817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:28.197 [2024-12-06 13:48:27.410931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:28.197 [2024-12-06 13:48:27.410946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:28.197 [2024-12-06 13:48:27.411072] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:28.197 [2024-12-06 13:48:27.411087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:28.197 [2024-12-06 13:48:27.411256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:28.197 [2024-12-06 13:48:27.411272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:28.197 passed 00:09:28.197 Test: blockdev copy ...passed 00:09:28.197 00:09:28.197 Run Summary: Type Total Ran Passed Failed Inactive 00:09:28.197 suites 1 1 n/a 0 0 00:09:28.197 tests 23 23 23 0 0 00:09:28.197 asserts 152 152 152 0 n/a 00:09:28.197 00:09:28.197 Elapsed time = 0.148 seconds 00:09:28.455 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:28.455 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.455 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:28.455 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.455 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:28.455 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:28.455 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:28.455 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:28.455 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:28.455 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:28.455 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:28.456 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:28.456 rmmod nvme_tcp 00:09:28.456 rmmod nvme_fabrics 00:09:28.456 rmmod nvme_keyring 00:09:28.456 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:28.456 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:28.456 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:09:28.456 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@517 -- # '[' -n 66925 ']' 00:09:28.456 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 66925 00:09:28.456 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 66925 ']' 00:09:28.456 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 66925 00:09:28.456 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:28.456 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:28.456 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66925 00:09:28.456 killing process with pid 66925 00:09:28.456 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:28.456 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:28.456 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66925' 00:09:28.456 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 66925 00:09:28.456 13:48:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 66925 00:09:28.715 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:28.715 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:28.715 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:28.715 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:28.715 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:28.715 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:28.715 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:28.974 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:28.974 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:28.974 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:28.974 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:28.974 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:28.974 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:28.974 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:28.974 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:28.974 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:28.974 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:28.974 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:28.974 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:28.974 13:48:28 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:28.974 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:28.974 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:28.974 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:28.974 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.974 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:28.974 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.974 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:09:28.974 00:09:28.974 real 0m2.717s 00:09:28.974 user 0m7.638s 00:09:28.974 sys 0m0.965s 00:09:28.974 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.974 ************************************ 00:09:28.974 END TEST nvmf_bdevio 00:09:28.974 ************************************ 00:09:28.974 13:48:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:29.233 13:48:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:29.233 00:09:29.233 real 2m36.147s 00:09:29.233 user 6m49.130s 00:09:29.233 sys 0m51.524s 00:09:29.233 13:48:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.233 13:48:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:29.233 ************************************ 00:09:29.233 END TEST nvmf_target_core 00:09:29.233 ************************************ 00:09:29.233 13:48:28 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:29.233 13:48:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:29.233 13:48:28 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.233 13:48:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:29.233 ************************************ 00:09:29.233 START TEST nvmf_target_extra 00:09:29.233 ************************************ 00:09:29.233 13:48:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:29.233 * Looking for test storage... 
00:09:29.233 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:29.233 13:48:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:29.233 13:48:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:09:29.233 13:48:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:29.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.493 --rc genhtml_branch_coverage=1 00:09:29.493 --rc genhtml_function_coverage=1 00:09:29.493 --rc genhtml_legend=1 00:09:29.493 --rc geninfo_all_blocks=1 00:09:29.493 --rc geninfo_unexecuted_blocks=1 00:09:29.493 00:09:29.493 ' 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:29.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.493 --rc genhtml_branch_coverage=1 00:09:29.493 --rc genhtml_function_coverage=1 00:09:29.493 --rc genhtml_legend=1 00:09:29.493 --rc geninfo_all_blocks=1 00:09:29.493 --rc geninfo_unexecuted_blocks=1 00:09:29.493 00:09:29.493 ' 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:29.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.493 --rc genhtml_branch_coverage=1 00:09:29.493 --rc genhtml_function_coverage=1 00:09:29.493 --rc genhtml_legend=1 00:09:29.493 --rc geninfo_all_blocks=1 00:09:29.493 --rc geninfo_unexecuted_blocks=1 00:09:29.493 00:09:29.493 ' 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:29.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.493 --rc genhtml_branch_coverage=1 00:09:29.493 --rc genhtml_function_coverage=1 00:09:29.493 --rc genhtml_legend=1 00:09:29.493 --rc geninfo_all_blocks=1 00:09:29.493 --rc geninfo_unexecuted_blocks=1 00:09:29.493 00:09:29.493 ' 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:29.493 13:48:28 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=cfa2def7-c8af-457f-82a0-b312efdea7f4 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:29.493 13:48:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:29.494 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:29.494 ************************************ 00:09:29.494 START TEST nvmf_auth_target 00:09:29.494 ************************************ 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:29.494 * Looking for test storage... 
00:09:29.494 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:09:29.494 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:29.754 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:09:29.754 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:29.754 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:09:29.754 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:09:29.754 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:29.754 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:09:29.754 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:29.754 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:29.754 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:29.754 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:09:29.754 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:29.754 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:29.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.754 --rc genhtml_branch_coverage=1 00:09:29.754 --rc genhtml_function_coverage=1 00:09:29.754 --rc genhtml_legend=1 00:09:29.754 --rc geninfo_all_blocks=1 00:09:29.754 --rc geninfo_unexecuted_blocks=1 00:09:29.754 00:09:29.754 ' 00:09:29.754 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:29.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.754 --rc genhtml_branch_coverage=1 00:09:29.754 --rc genhtml_function_coverage=1 00:09:29.754 --rc genhtml_legend=1 00:09:29.754 --rc geninfo_all_blocks=1 00:09:29.754 --rc geninfo_unexecuted_blocks=1 00:09:29.754 00:09:29.754 ' 00:09:29.754 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:29.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.754 --rc genhtml_branch_coverage=1 00:09:29.754 --rc genhtml_function_coverage=1 00:09:29.755 --rc genhtml_legend=1 00:09:29.755 --rc geninfo_all_blocks=1 00:09:29.755 --rc geninfo_unexecuted_blocks=1 00:09:29.755 00:09:29.755 ' 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:29.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.755 --rc genhtml_branch_coverage=1 00:09:29.755 --rc genhtml_function_coverage=1 00:09:29.755 --rc genhtml_legend=1 00:09:29.755 --rc geninfo_all_blocks=1 00:09:29.755 --rc geninfo_unexecuted_blocks=1 00:09:29.755 00:09:29.755 ' 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cfa2def7-c8af-457f-82a0-b312efdea7f4 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:29.755 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:29.755 
13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:29.755 Cannot find device "nvmf_init_br" 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:29.755 Cannot find device "nvmf_init_br2" 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:29.755 Cannot find device "nvmf_tgt_br" 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:09:29.755 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:29.756 Cannot find device "nvmf_tgt_br2" 00:09:29.756 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:09:29.756 13:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:29.756 Cannot find device "nvmf_init_br" 00:09:29.756 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:09:29.756 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:29.756 Cannot find device "nvmf_init_br2" 00:09:29.756 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:09:29.756 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:29.756 Cannot find device "nvmf_tgt_br" 00:09:29.756 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:09:29.756 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:29.756 Cannot find device "nvmf_tgt_br2" 00:09:29.756 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:09:29.756 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:29.756 Cannot find device "nvmf_br" 00:09:29.756 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:09:29.756 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:29.756 Cannot find device "nvmf_init_if" 00:09:29.756 13:48:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:09:29.756 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:29.756 Cannot find device "nvmf_init_if2" 00:09:29.756 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:09:29.756 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:29.756 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:29.756 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:09:29.756 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:29.756 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:29.756 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:09:29.756 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:29.756 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:29.756 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:29.756 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:29.756 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:29.756 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:30.015 13:48:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:30.015 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:30.015 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:09:30.015 00:09:30.015 --- 10.0.0.3 ping statistics --- 00:09:30.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.015 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:30.015 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:30.015 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:09:30.015 00:09:30.015 --- 10.0.0.4 ping statistics --- 00:09:30.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.015 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:30.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:30.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:09:30.015 00:09:30.015 --- 10.0.0.1 ping statistics --- 00:09:30.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.015 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:30.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:30.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:09:30.015 00:09:30.015 --- 10.0.0.2 ping statistics --- 00:09:30.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.015 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=67246 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 67246 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67246 ']' 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.015 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:30.583 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:30.583 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:09:30.583 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:30.583 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:30.583 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:30.583 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:30.583 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=67271 00:09:30.583 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:09:30.583 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:09:30.583 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:09:30.583 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:30.583 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:30.583 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:30.583 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:09:30.583 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:09:30.583 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:30.583 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=7af7e515329bc0fe573fc67008c88497d227493da96ab7a2 00:09:30.583 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:09:30.583 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.J6I 00:09:30.583 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 7af7e515329bc0fe573fc67008c88497d227493da96ab7a2 0 00:09:30.583 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 7af7e515329bc0fe573fc67008c88497d227493da96ab7a2 0 00:09:30.583 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:30.583 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:30.583 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=7af7e515329bc0fe573fc67008c88497d227493da96ab7a2 00:09:30.583 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:09:30.583 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:30.583 13:48:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.J6I 00:09:30.583 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.J6I 00:09:30.583 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.J6I 00:09:30.584 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:09:30.584 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:30.584 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:30.584 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:30.584 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:09:30.584 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:09:30.584 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:30.584 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6855bbbfa29134c5c54168ceb78b8334b05dfba1d48fe202f3ad115f5ad35a76 00:09:30.584 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:09:30.584 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.IVa 00:09:30.584 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6855bbbfa29134c5c54168ceb78b8334b05dfba1d48fe202f3ad115f5ad35a76 3 00:09:30.584 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6855bbbfa29134c5c54168ceb78b8334b05dfba1d48fe202f3ad115f5ad35a76 3 00:09:30.584 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:30.584 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:30.584 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6855bbbfa29134c5c54168ceb78b8334b05dfba1d48fe202f3ad115f5ad35a76 00:09:30.584 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:09:30.584 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:30.843 13:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.IVa 00:09:30.843 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.IVa 00:09:30.843 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.IVa 00:09:30.843 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:09:30.843 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:30.843 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:30.843 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:30.843 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:09:30.843 13:48:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:09:30.843 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:30.843 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f46d9ff2fe944a8bf8c6f931c9f2b4cf 00:09:30.843 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:09:30.843 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Abw 00:09:30.843 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f46d9ff2fe944a8bf8c6f931c9f2b4cf 1 00:09:30.843 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f46d9ff2fe944a8bf8c6f931c9f2b4cf 1 00:09:30.843 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:30.843 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:30.843 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f46d9ff2fe944a8bf8c6f931c9f2b4cf 00:09:30.843 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:09:30.843 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:30.843 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Abw 00:09:30.843 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Abw 00:09:30.843 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.Abw 00:09:30.843 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2d154b4dcd909b9bd82fcf8b1e0976f14fa0374582dedc7b 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.EPI 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2d154b4dcd909b9bd82fcf8b1e0976f14fa0374582dedc7b 2 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2d154b4dcd909b9bd82fcf8b1e0976f14fa0374582dedc7b 2 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2d154b4dcd909b9bd82fcf8b1e0976f14fa0374582dedc7b 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.EPI 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.EPI 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.EPI 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d6406d8ce2781d8f6250293c5e34a19466248a7f6e4d19c6 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.k4F 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d6406d8ce2781d8f6250293c5e34a19466248a7f6e4d19c6 2 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d6406d8ce2781d8f6250293c5e34a19466248a7f6e4d19c6 2 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d6406d8ce2781d8f6250293c5e34a19466248a7f6e4d19c6 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.k4F 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.k4F 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.k4F 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:30.844 13:48:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=76992c3e3444646712b96da1e9de6d3c 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Lsg 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 76992c3e3444646712b96da1e9de6d3c 1 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 76992c3e3444646712b96da1e9de6d3c 1 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=76992c3e3444646712b96da1e9de6d3c 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:09:30.844 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:31.103 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Lsg 00:09:31.103 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Lsg 00:09:31.103 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Lsg 00:09:31.103 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:09:31.103 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:31.103 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:31.103 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:31.103 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:09:31.103 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:09:31.103 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:31.103 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3fdf246775d8a99492dfed3562300dcff346e41fb1b4777bf67a1a6610d8b44c 00:09:31.103 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:09:31.103 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.rcE 00:09:31.103 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
3fdf246775d8a99492dfed3562300dcff346e41fb1b4777bf67a1a6610d8b44c 3 00:09:31.103 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3fdf246775d8a99492dfed3562300dcff346e41fb1b4777bf67a1a6610d8b44c 3 00:09:31.103 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:31.103 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:31.103 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3fdf246775d8a99492dfed3562300dcff346e41fb1b4777bf67a1a6610d8b44c 00:09:31.103 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:09:31.103 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:31.103 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.rcE 00:09:31.103 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.rcE 00:09:31.103 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.rcE 00:09:31.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.103 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:09:31.103 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 67246 00:09:31.103 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67246 ']' 00:09:31.103 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.104 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:31.104 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.104 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:31.104 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:31.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:09:31.363 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:31.363 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:09:31.363 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 67271 /var/tmp/host.sock 00:09:31.363 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67271 ']' 00:09:31.363 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:09:31.363 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:31.363 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
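The key files above are the DHCHAP secrets for this test (keys 0-3 plus controller keys ckey0-ckey2; ckey3 is intentionally left empty). The rest of the section registers them in keyrings on both the target (default socket /var/tmp/spdk.sock) and the host application (/var/tmp/host.sock), then connects once per digest/dhgroup/key combination. Below is a condensed sketch of key generation and of the first sha256/null iteration with key0/ckey0, using the file names and NQNs from this run; the rpc_cmd/hostrpc wrappers are expanded to the underlying rpc.py calls, and the inline python helper that wraps the hex string into the DHHC-1 secret format is summarized in a comment because its body is not shown in the trace.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # gen_dhchap_key null 48: 24 random bytes as a 48-char hex string,
  # stored as a DHHC-1 secret file with mode 0600
  key=$(xxd -p -c0 -l 24 /dev/urandom)
  file=$(mktemp -t spdk.key-null.XXX)
  # (inline python step formats "$key" as a DHHC-1:00:...: secret into "$file")
  chmod 0600 "$file"

  # register key0/ckey0 on the target and on the host app
  "$rpc" keyring_file_add_key key0  /tmp/spdk.key-null.J6I
  "$rpc" keyring_file_add_key ckey0 /tmp/spdk.key-sha512.IVa
  "$rpc" -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.J6I
  "$rpc" -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.IVa

  # restrict the host to one digest/dhgroup pair for this iteration
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups null

  # allow the host NQN on the subsystem, with bidirectional DHCHAP keys
  "$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # authenticated connect from the host-side bdev layer, then inspect the qpair
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  "$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth.state, .[0].auth.digest, .[0].auth.dhgroup'

Each iteration in the trace then detaches the controller (bdev_nvme_detach_controller nvme0), repeats the connect through nvme-cli by passing the same DHHC-1 strings directly via --dhchap-secret/--dhchap-ctrl-secret, and finishes with nvme disconnect and nvmf_subsystem_remove_host before the next digest/dhgroup combination is tried.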
00:09:31.363 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:31.363 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:31.622 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:31.622 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:09:31.622 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:09:31.622 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.622 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:31.622 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.622 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:31.622 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.J6I 00:09:31.622 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.622 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:31.622 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.622 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.J6I 00:09:31.622 13:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.J6I 00:09:31.881 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.IVa ]] 00:09:31.882 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.IVa 00:09:31.882 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.882 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:31.882 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.882 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.IVa 00:09:31.882 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.IVa 00:09:32.141 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:32.141 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Abw 00:09:32.141 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.141 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:32.141 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.141 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Abw 00:09:32.141 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Abw 00:09:32.400 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.EPI ]] 00:09:32.400 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.EPI 00:09:32.400 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.400 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:32.400 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.400 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.EPI 00:09:32.400 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.EPI 00:09:32.400 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:32.400 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.k4F 00:09:32.400 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.400 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:32.400 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.400 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.k4F 00:09:32.400 13:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.k4F 00:09:33.047 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Lsg ]] 00:09:33.047 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Lsg 00:09:33.047 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.047 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:33.047 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.047 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Lsg 00:09:33.047 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Lsg 00:09:33.047 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:33.047 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.rcE 00:09:33.047 13:48:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.047 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:33.047 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.047 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.rcE 00:09:33.047 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.rcE 00:09:33.317 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:09:33.317 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:09:33.317 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:33.317 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:33.317 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:33.317 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:33.317 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:09:33.317 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:33.317 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:33.317 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:33.317 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:33.317 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:33.317 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:33.317 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.317 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:33.575 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.575 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:33.575 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:33.575 13:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:33.834 00:09:33.834 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:33.834 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:33.834 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:34.092 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:34.092 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:34.092 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.092 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:34.092 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.092 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:34.092 { 00:09:34.092 "cntlid": 1, 00:09:34.092 "qid": 0, 00:09:34.092 "state": "enabled", 00:09:34.092 "thread": "nvmf_tgt_poll_group_000", 00:09:34.092 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:09:34.092 "listen_address": { 00:09:34.092 "trtype": "TCP", 00:09:34.092 "adrfam": "IPv4", 00:09:34.092 "traddr": "10.0.0.3", 00:09:34.092 "trsvcid": "4420" 00:09:34.092 }, 00:09:34.092 "peer_address": { 00:09:34.092 "trtype": "TCP", 00:09:34.092 "adrfam": "IPv4", 00:09:34.092 "traddr": "10.0.0.1", 00:09:34.092 "trsvcid": "55994" 00:09:34.092 }, 00:09:34.092 "auth": { 00:09:34.092 "state": "completed", 00:09:34.092 "digest": "sha256", 00:09:34.092 "dhgroup": "null" 00:09:34.092 } 00:09:34.092 } 00:09:34.092 ]' 00:09:34.092 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:34.092 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:34.092 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:34.092 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:34.092 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:34.092 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:34.092 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:34.092 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:34.351 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2FmN2U1MTUzMjliYzBmZTU3M2ZjNjcwMDhjODg0OTdkMjI3NDkzZGE5NmFiN2EyQFmPpA==: --dhchap-ctrl-secret DHHC-1:03:Njg1NWJiYmZhMjkxMzRjNWM1NDE2OGNlYjc4YjgzMzRiMDVkZmJhMWQ0OGZlMjAyZjNhZDExNWY1YWQzNWE3Ntyf5n4=: 00:09:34.351 13:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:00:N2FmN2U1MTUzMjliYzBmZTU3M2ZjNjcwMDhjODg0OTdkMjI3NDkzZGE5NmFiN2EyQFmPpA==: --dhchap-ctrl-secret DHHC-1:03:Njg1NWJiYmZhMjkxMzRjNWM1NDE2OGNlYjc4YjgzMzRiMDVkZmJhMWQ0OGZlMjAyZjNhZDExNWY1YWQzNWE3Ntyf5n4=: 00:09:38.541 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:38.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:38.542 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:09:38.542 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.542 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:38.542 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.542 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:38.542 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:38.542 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:38.542 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:09:38.542 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:38.542 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:38.542 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:38.542 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:38.542 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:38.542 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:38.542 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.542 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:38.542 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.542 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:38.542 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:38.542 13:48:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:38.542 00:09:38.802 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:38.802 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:38.802 13:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:38.802 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:38.802 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:38.802 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.802 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:38.802 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.802 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:38.802 { 00:09:38.802 "cntlid": 3, 00:09:38.802 "qid": 0, 00:09:38.802 "state": "enabled", 00:09:38.802 "thread": "nvmf_tgt_poll_group_000", 00:09:38.802 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:09:38.802 "listen_address": { 00:09:38.802 "trtype": "TCP", 00:09:38.802 "adrfam": "IPv4", 00:09:38.802 "traddr": "10.0.0.3", 00:09:38.802 "trsvcid": "4420" 00:09:38.802 }, 00:09:38.802 "peer_address": { 00:09:38.802 "trtype": "TCP", 00:09:38.802 "adrfam": "IPv4", 00:09:38.802 "traddr": "10.0.0.1", 00:09:38.802 "trsvcid": "56028" 00:09:38.802 }, 00:09:38.802 "auth": { 00:09:38.802 "state": "completed", 00:09:38.802 "digest": "sha256", 00:09:38.802 "dhgroup": "null" 00:09:38.802 } 00:09:38.802 } 00:09:38.802 ]' 00:09:38.802 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:39.061 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:39.061 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:39.061 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:39.061 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:39.061 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:39.061 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:39.061 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:39.320 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjQ2ZDlmZjJmZTk0NGE4YmY4YzZmOTMxYzlmMmI0Y2aDcmP9: --dhchap-ctrl-secret 
DHHC-1:02:MmQxNTRiNGRjZDkwOWI5YmQ4MmZjZjhiMWUwOTc2ZjE0ZmEwMzc0NTgyZGVkYzdiP7GVWQ==: 00:09:39.320 13:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:01:ZjQ2ZDlmZjJmZTk0NGE4YmY4YzZmOTMxYzlmMmI0Y2aDcmP9: --dhchap-ctrl-secret DHHC-1:02:MmQxNTRiNGRjZDkwOWI5YmQ4MmZjZjhiMWUwOTc2ZjE0ZmEwMzc0NTgyZGVkYzdiP7GVWQ==: 00:09:39.889 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:39.889 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:39.889 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:09:39.889 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.889 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:39.889 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.889 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:39.889 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:39.889 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:40.149 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:09:40.149 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:40.149 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:40.149 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:40.149 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:40.149 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:40.149 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:40.149 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.149 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:40.149 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.149 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:40.149 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:40.149 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:40.717 00:09:40.717 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:40.717 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:40.717 13:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:40.977 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:40.977 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:40.977 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.977 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:40.977 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.977 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:40.977 { 00:09:40.977 "cntlid": 5, 00:09:40.977 "qid": 0, 00:09:40.977 "state": "enabled", 00:09:40.977 "thread": "nvmf_tgt_poll_group_000", 00:09:40.977 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:09:40.977 "listen_address": { 00:09:40.977 "trtype": "TCP", 00:09:40.977 "adrfam": "IPv4", 00:09:40.977 "traddr": "10.0.0.3", 00:09:40.977 "trsvcid": "4420" 00:09:40.977 }, 00:09:40.977 "peer_address": { 00:09:40.977 "trtype": "TCP", 00:09:40.977 "adrfam": "IPv4", 00:09:40.977 "traddr": "10.0.0.1", 00:09:40.977 "trsvcid": "39756" 00:09:40.977 }, 00:09:40.977 "auth": { 00:09:40.977 "state": "completed", 00:09:40.977 "digest": "sha256", 00:09:40.977 "dhgroup": "null" 00:09:40.977 } 00:09:40.977 } 00:09:40.977 ]' 00:09:40.977 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:40.977 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:40.977 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:40.977 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:40.977 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:40.977 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:40.977 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:40.977 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:41.235 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZDY0MDZkOGNlMjc4MWQ4ZjYyNTAyOTNjNWUzNGExOTQ2NjI0OGE3ZjZlNGQxOWM2YULh6g==: --dhchap-ctrl-secret DHHC-1:01:NzY5OTJjM2UzNDQ0NjQ2NzEyYjk2ZGExZTlkZTZkM2NLb/z4: 00:09:41.235 13:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:02:ZDY0MDZkOGNlMjc4MWQ4ZjYyNTAyOTNjNWUzNGExOTQ2NjI0OGE3ZjZlNGQxOWM2YULh6g==: --dhchap-ctrl-secret DHHC-1:01:NzY5OTJjM2UzNDQ0NjQ2NzEyYjk2ZGExZTlkZTZkM2NLb/z4: 00:09:41.801 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:41.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:41.801 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:09:41.801 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.801 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:41.801 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.801 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:41.802 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:41.802 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:42.061 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:09:42.061 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:42.061 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:42.061 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:42.061 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:42.061 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:42.061 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key3 00:09:42.061 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.061 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:42.061 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.061 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:42.061 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:42.061 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:42.320 00:09:42.320 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:42.320 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:42.320 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:42.578 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:42.578 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:42.578 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.578 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:42.578 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.578 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:42.578 { 00:09:42.578 "cntlid": 7, 00:09:42.578 "qid": 0, 00:09:42.578 "state": "enabled", 00:09:42.578 "thread": "nvmf_tgt_poll_group_000", 00:09:42.578 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:09:42.578 "listen_address": { 00:09:42.578 "trtype": "TCP", 00:09:42.578 "adrfam": "IPv4", 00:09:42.579 "traddr": "10.0.0.3", 00:09:42.579 "trsvcid": "4420" 00:09:42.579 }, 00:09:42.579 "peer_address": { 00:09:42.579 "trtype": "TCP", 00:09:42.579 "adrfam": "IPv4", 00:09:42.579 "traddr": "10.0.0.1", 00:09:42.579 "trsvcid": "39786" 00:09:42.579 }, 00:09:42.579 "auth": { 00:09:42.579 "state": "completed", 00:09:42.579 "digest": "sha256", 00:09:42.579 "dhgroup": "null" 00:09:42.579 } 00:09:42.579 } 00:09:42.579 ]' 00:09:42.579 13:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:42.837 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:42.837 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:42.837 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:42.837 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:42.837 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:42.837 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:42.837 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:43.097 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:M2ZkZjI0Njc3NWQ4YTk5NDkyZGZlZDM1NjIzMDBkY2ZmMzQ2ZTQxZmIxYjQ3NzdiZjY3YTFhNjYxMGQ4YjQ0Y2MZpOQ=: 00:09:43.097 13:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:03:M2ZkZjI0Njc3NWQ4YTk5NDkyZGZlZDM1NjIzMDBkY2ZmMzQ2ZTQxZmIxYjQ3NzdiZjY3YTFhNjYxMGQ4YjQ0Y2MZpOQ=: 00:09:44.034 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:44.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:44.034 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:09:44.034 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.034 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.034 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.034 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:44.034 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:44.034 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:44.034 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:44.034 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:09:44.034 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:44.034 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:44.034 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:44.034 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:44.034 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:44.034 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:44.034 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.034 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.034 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.034 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:44.034 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:44.034 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:44.602 00:09:44.602 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:44.602 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:44.602 13:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:44.860 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:44.860 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:44.860 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.860 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.860 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.860 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:44.860 { 00:09:44.860 "cntlid": 9, 00:09:44.860 "qid": 0, 00:09:44.860 "state": "enabled", 00:09:44.860 "thread": "nvmf_tgt_poll_group_000", 00:09:44.860 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:09:44.860 "listen_address": { 00:09:44.860 "trtype": "TCP", 00:09:44.860 "adrfam": "IPv4", 00:09:44.860 "traddr": "10.0.0.3", 00:09:44.860 "trsvcid": "4420" 00:09:44.860 }, 00:09:44.860 "peer_address": { 00:09:44.860 "trtype": "TCP", 00:09:44.860 "adrfam": "IPv4", 00:09:44.860 "traddr": "10.0.0.1", 00:09:44.860 "trsvcid": "39822" 00:09:44.860 }, 00:09:44.860 "auth": { 00:09:44.860 "state": "completed", 00:09:44.860 "digest": "sha256", 00:09:44.860 "dhgroup": "ffdhe2048" 00:09:44.860 } 00:09:44.860 } 00:09:44.860 ]' 00:09:44.860 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:44.860 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:44.860 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:44.860 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:44.860 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:44.860 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:44.860 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:44.860 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:45.119 
13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2FmN2U1MTUzMjliYzBmZTU3M2ZjNjcwMDhjODg0OTdkMjI3NDkzZGE5NmFiN2EyQFmPpA==: --dhchap-ctrl-secret DHHC-1:03:Njg1NWJiYmZhMjkxMzRjNWM1NDE2OGNlYjc4YjgzMzRiMDVkZmJhMWQ0OGZlMjAyZjNhZDExNWY1YWQzNWE3Ntyf5n4=: 00:09:45.119 13:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:00:N2FmN2U1MTUzMjliYzBmZTU3M2ZjNjcwMDhjODg0OTdkMjI3NDkzZGE5NmFiN2EyQFmPpA==: --dhchap-ctrl-secret DHHC-1:03:Njg1NWJiYmZhMjkxMzRjNWM1NDE2OGNlYjc4YjgzMzRiMDVkZmJhMWQ0OGZlMjAyZjNhZDExNWY1YWQzNWE3Ntyf5n4=: 00:09:45.686 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:45.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:45.686 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:09:45.686 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.686 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.944 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.944 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:45.944 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:45.944 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:46.203 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:09:46.203 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:46.203 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:46.203 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:46.203 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:46.203 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:46.203 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:46.203 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.203 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.203 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.203 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:46.203 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:46.203 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:46.462 00:09:46.462 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:46.462 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:46.462 13:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:46.721 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:46.721 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:46.721 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.721 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.721 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.722 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:46.722 { 00:09:46.722 "cntlid": 11, 00:09:46.722 "qid": 0, 00:09:46.722 "state": "enabled", 00:09:46.722 "thread": "nvmf_tgt_poll_group_000", 00:09:46.722 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:09:46.722 "listen_address": { 00:09:46.722 "trtype": "TCP", 00:09:46.722 "adrfam": "IPv4", 00:09:46.722 "traddr": "10.0.0.3", 00:09:46.722 "trsvcid": "4420" 00:09:46.722 }, 00:09:46.722 "peer_address": { 00:09:46.722 "trtype": "TCP", 00:09:46.722 "adrfam": "IPv4", 00:09:46.722 "traddr": "10.0.0.1", 00:09:46.722 "trsvcid": "39846" 00:09:46.722 }, 00:09:46.722 "auth": { 00:09:46.722 "state": "completed", 00:09:46.722 "digest": "sha256", 00:09:46.722 "dhgroup": "ffdhe2048" 00:09:46.722 } 00:09:46.722 } 00:09:46.722 ]' 00:09:46.722 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:46.722 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:46.722 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:46.980 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:46.980 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:46.980 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:46.980 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:46.980 
13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:47.238 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjQ2ZDlmZjJmZTk0NGE4YmY4YzZmOTMxYzlmMmI0Y2aDcmP9: --dhchap-ctrl-secret DHHC-1:02:MmQxNTRiNGRjZDkwOWI5YmQ4MmZjZjhiMWUwOTc2ZjE0ZmEwMzc0NTgyZGVkYzdiP7GVWQ==: 00:09:47.238 13:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:01:ZjQ2ZDlmZjJmZTk0NGE4YmY4YzZmOTMxYzlmMmI0Y2aDcmP9: --dhchap-ctrl-secret DHHC-1:02:MmQxNTRiNGRjZDkwOWI5YmQ4MmZjZjhiMWUwOTc2ZjE0ZmEwMzc0NTgyZGVkYzdiP7GVWQ==: 00:09:47.805 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:47.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:47.805 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:09:47.805 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.805 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:47.805 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.805 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:47.805 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:47.805 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:48.076 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:09:48.076 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:48.076 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:48.076 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:48.076 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:48.076 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:48.076 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:48.076 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.076 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.076 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:09:48.076 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:48.076 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:48.076 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:48.646 00:09:48.646 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:48.646 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:48.646 13:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:48.918 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:48.918 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:48.918 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.918 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.918 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.918 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:48.918 { 00:09:48.918 "cntlid": 13, 00:09:48.918 "qid": 0, 00:09:48.918 "state": "enabled", 00:09:48.918 "thread": "nvmf_tgt_poll_group_000", 00:09:48.918 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:09:48.918 "listen_address": { 00:09:48.918 "trtype": "TCP", 00:09:48.918 "adrfam": "IPv4", 00:09:48.918 "traddr": "10.0.0.3", 00:09:48.918 "trsvcid": "4420" 00:09:48.918 }, 00:09:48.918 "peer_address": { 00:09:48.918 "trtype": "TCP", 00:09:48.918 "adrfam": "IPv4", 00:09:48.918 "traddr": "10.0.0.1", 00:09:48.918 "trsvcid": "39860" 00:09:48.918 }, 00:09:48.918 "auth": { 00:09:48.918 "state": "completed", 00:09:48.918 "digest": "sha256", 00:09:48.918 "dhgroup": "ffdhe2048" 00:09:48.918 } 00:09:48.918 } 00:09:48.918 ]' 00:09:48.918 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:48.918 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:48.918 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:48.918 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:48.918 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:48.918 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:48.918 13:48:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:48.918 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:49.178 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY0MDZkOGNlMjc4MWQ4ZjYyNTAyOTNjNWUzNGExOTQ2NjI0OGE3ZjZlNGQxOWM2YULh6g==: --dhchap-ctrl-secret DHHC-1:01:NzY5OTJjM2UzNDQ0NjQ2NzEyYjk2ZGExZTlkZTZkM2NLb/z4: 00:09:49.178 13:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:02:ZDY0MDZkOGNlMjc4MWQ4ZjYyNTAyOTNjNWUzNGExOTQ2NjI0OGE3ZjZlNGQxOWM2YULh6g==: --dhchap-ctrl-secret DHHC-1:01:NzY5OTJjM2UzNDQ0NjQ2NzEyYjk2ZGExZTlkZTZkM2NLb/z4: 00:09:49.746 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:50.005 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:50.005 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:09:50.005 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.005 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:50.005 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.005 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:50.005 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:50.005 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:50.005 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:09:50.005 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:50.005 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:50.005 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:50.005 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:50.005 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:50.005 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key3 00:09:50.005 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.005 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
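The block above is one pass of the test's connect_authenticate loop (sha256 digest, ffdhe2048 DH group): the host-side options are restricted to the digest/dhgroup under test, the target is told which DH-HMAC-CHAP key pair to accept for the host, the host attaches a controller with the matching keys, the qpair's auth descriptor is checked, and the controller is detached before the next key/dhgroup pass. Each pass also exercises the kernel initiator via nvme connect with the corresponding DHHC-1 secrets; the sketch below covers only the SPDK host path. It is a minimal hand-run sketch using the addresses and NQNs shown in this log, and it assumes the key names key0/ckey0 were already registered with both applications earlier in the run (that step is not part of this excerpt).

# Host side (hostrpc in this log, socket /var/tmp/host.sock): restrict negotiation
# to the digest and DH group under test.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
# Target side (rpc_cmd in this log): accept DH-HMAC-CHAP key0/ckey0 for this host NQN.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# Host side: attach a controller with the same keys, which triggers authentication.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# Target side: inspect the qpair's auth descriptor; host side: tear down for the next pass.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq '.[0].auth'
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0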
00:09:50.005 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.005 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:50.005 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:50.005 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:50.573 00:09:50.573 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:50.573 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:50.573 13:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:50.832 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:50.832 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:50.832 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.832 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:50.832 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.832 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:50.832 { 00:09:50.832 "cntlid": 15, 00:09:50.832 "qid": 0, 00:09:50.832 "state": "enabled", 00:09:50.832 "thread": "nvmf_tgt_poll_group_000", 00:09:50.832 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:09:50.832 "listen_address": { 00:09:50.832 "trtype": "TCP", 00:09:50.832 "adrfam": "IPv4", 00:09:50.832 "traddr": "10.0.0.3", 00:09:50.832 "trsvcid": "4420" 00:09:50.832 }, 00:09:50.832 "peer_address": { 00:09:50.832 "trtype": "TCP", 00:09:50.832 "adrfam": "IPv4", 00:09:50.832 "traddr": "10.0.0.1", 00:09:50.832 "trsvcid": "56626" 00:09:50.832 }, 00:09:50.832 "auth": { 00:09:50.832 "state": "completed", 00:09:50.832 "digest": "sha256", 00:09:50.832 "dhgroup": "ffdhe2048" 00:09:50.832 } 00:09:50.832 } 00:09:50.832 ]' 00:09:50.832 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:50.832 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:50.832 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:50.832 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:50.832 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:50.832 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:50.832 
13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:50.832 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:51.092 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZkZjI0Njc3NWQ4YTk5NDkyZGZlZDM1NjIzMDBkY2ZmMzQ2ZTQxZmIxYjQ3NzdiZjY3YTFhNjYxMGQ4YjQ0Y2MZpOQ=: 00:09:51.092 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:03:M2ZkZjI0Njc3NWQ4YTk5NDkyZGZlZDM1NjIzMDBkY2ZmMzQ2ZTQxZmIxYjQ3NzdiZjY3YTFhNjYxMGQ4YjQ0Y2MZpOQ=: 00:09:51.660 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:51.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:51.660 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:09:51.660 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.660 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:51.660 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.661 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:51.661 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:51.661 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:51.661 13:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:51.919 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:09:51.919 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:51.919 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:51.919 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:51.919 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:51.919 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:51.919 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:51.920 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.920 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:09:51.920 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.920 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:51.920 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:51.920 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:52.178 00:09:52.438 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:52.438 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:52.438 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:52.697 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:52.697 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:52.697 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.697 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:52.697 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.697 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:52.697 { 00:09:52.697 "cntlid": 17, 00:09:52.697 "qid": 0, 00:09:52.697 "state": "enabled", 00:09:52.697 "thread": "nvmf_tgt_poll_group_000", 00:09:52.697 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:09:52.697 "listen_address": { 00:09:52.697 "trtype": "TCP", 00:09:52.697 "adrfam": "IPv4", 00:09:52.697 "traddr": "10.0.0.3", 00:09:52.697 "trsvcid": "4420" 00:09:52.697 }, 00:09:52.697 "peer_address": { 00:09:52.697 "trtype": "TCP", 00:09:52.697 "adrfam": "IPv4", 00:09:52.697 "traddr": "10.0.0.1", 00:09:52.697 "trsvcid": "56658" 00:09:52.697 }, 00:09:52.697 "auth": { 00:09:52.697 "state": "completed", 00:09:52.697 "digest": "sha256", 00:09:52.697 "dhgroup": "ffdhe3072" 00:09:52.697 } 00:09:52.697 } 00:09:52.697 ]' 00:09:52.697 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:52.697 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:52.697 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:52.697 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:52.697 13:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:52.697 13:48:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:52.697 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:52.697 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:52.956 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2FmN2U1MTUzMjliYzBmZTU3M2ZjNjcwMDhjODg0OTdkMjI3NDkzZGE5NmFiN2EyQFmPpA==: --dhchap-ctrl-secret DHHC-1:03:Njg1NWJiYmZhMjkxMzRjNWM1NDE2OGNlYjc4YjgzMzRiMDVkZmJhMWQ0OGZlMjAyZjNhZDExNWY1YWQzNWE3Ntyf5n4=: 00:09:52.956 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:00:N2FmN2U1MTUzMjliYzBmZTU3M2ZjNjcwMDhjODg0OTdkMjI3NDkzZGE5NmFiN2EyQFmPpA==: --dhchap-ctrl-secret DHHC-1:03:Njg1NWJiYmZhMjkxMzRjNWM1NDE2OGNlYjc4YjgzMzRiMDVkZmJhMWQ0OGZlMjAyZjNhZDExNWY1YWQzNWE3Ntyf5n4=: 00:09:53.895 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:53.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:53.895 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:09:53.895 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.895 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.895 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.895 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:53.895 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:53.895 13:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:53.895 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:09:53.895 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:53.895 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:53.895 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:53.895 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:53.895 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:53.895 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:09:53.895 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.895 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.895 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.895 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:53.895 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:53.895 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:54.465 00:09:54.465 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:54.465 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:54.465 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:54.724 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:54.724 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:54.724 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.724 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:54.724 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.724 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:54.724 { 00:09:54.724 "cntlid": 19, 00:09:54.724 "qid": 0, 00:09:54.724 "state": "enabled", 00:09:54.724 "thread": "nvmf_tgt_poll_group_000", 00:09:54.724 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:09:54.724 "listen_address": { 00:09:54.724 "trtype": "TCP", 00:09:54.724 "adrfam": "IPv4", 00:09:54.724 "traddr": "10.0.0.3", 00:09:54.724 "trsvcid": "4420" 00:09:54.724 }, 00:09:54.724 "peer_address": { 00:09:54.724 "trtype": "TCP", 00:09:54.724 "adrfam": "IPv4", 00:09:54.724 "traddr": "10.0.0.1", 00:09:54.724 "trsvcid": "56678" 00:09:54.724 }, 00:09:54.724 "auth": { 00:09:54.724 "state": "completed", 00:09:54.724 "digest": "sha256", 00:09:54.724 "dhgroup": "ffdhe3072" 00:09:54.724 } 00:09:54.724 } 00:09:54.724 ]' 00:09:54.724 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:54.724 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:54.724 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:54.724 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:54.724 13:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:54.724 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:54.725 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:54.725 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:54.984 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjQ2ZDlmZjJmZTk0NGE4YmY4YzZmOTMxYzlmMmI0Y2aDcmP9: --dhchap-ctrl-secret DHHC-1:02:MmQxNTRiNGRjZDkwOWI5YmQ4MmZjZjhiMWUwOTc2ZjE0ZmEwMzc0NTgyZGVkYzdiP7GVWQ==: 00:09:54.984 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:01:ZjQ2ZDlmZjJmZTk0NGE4YmY4YzZmOTMxYzlmMmI0Y2aDcmP9: --dhchap-ctrl-secret DHHC-1:02:MmQxNTRiNGRjZDkwOWI5YmQ4MmZjZjhiMWUwOTc2ZjE0ZmEwMzc0NTgyZGVkYzdiP7GVWQ==: 00:09:55.553 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:55.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:55.553 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:09:55.553 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.553 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.553 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.553 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:55.553 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:55.553 13:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:55.814 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:09:55.814 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:55.814 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:55.814 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:55.814 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:55.814 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:55.814 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:55.814 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.814 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.814 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.814 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:55.814 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:55.814 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:56.079 00:09:56.079 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:56.079 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:56.079 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:56.337 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:56.337 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:56.337 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.337 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:56.337 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.337 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:56.337 { 00:09:56.337 "cntlid": 21, 00:09:56.337 "qid": 0, 00:09:56.337 "state": "enabled", 00:09:56.337 "thread": "nvmf_tgt_poll_group_000", 00:09:56.337 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:09:56.337 "listen_address": { 00:09:56.337 "trtype": "TCP", 00:09:56.337 "adrfam": "IPv4", 00:09:56.337 "traddr": "10.0.0.3", 00:09:56.337 "trsvcid": "4420" 00:09:56.337 }, 00:09:56.337 "peer_address": { 00:09:56.337 "trtype": "TCP", 00:09:56.337 "adrfam": "IPv4", 00:09:56.337 "traddr": "10.0.0.1", 00:09:56.337 "trsvcid": "56694" 00:09:56.337 }, 00:09:56.337 "auth": { 00:09:56.337 "state": "completed", 00:09:56.337 "digest": "sha256", 00:09:56.337 "dhgroup": "ffdhe3072" 00:09:56.337 } 00:09:56.337 } 00:09:56.337 ]' 00:09:56.337 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:56.596 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:56.596 13:48:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:56.596 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:56.596 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:56.596 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:56.596 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:56.596 13:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:56.854 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY0MDZkOGNlMjc4MWQ4ZjYyNTAyOTNjNWUzNGExOTQ2NjI0OGE3ZjZlNGQxOWM2YULh6g==: --dhchap-ctrl-secret DHHC-1:01:NzY5OTJjM2UzNDQ0NjQ2NzEyYjk2ZGExZTlkZTZkM2NLb/z4: 00:09:56.854 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:02:ZDY0MDZkOGNlMjc4MWQ4ZjYyNTAyOTNjNWUzNGExOTQ2NjI0OGE3ZjZlNGQxOWM2YULh6g==: --dhchap-ctrl-secret DHHC-1:01:NzY5OTJjM2UzNDQ0NjQ2NzEyYjk2ZGExZTlkZTZkM2NLb/z4: 00:09:57.422 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:57.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:57.422 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:09:57.422 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.422 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.422 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.422 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:57.422 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:57.422 13:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:57.680 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:09:57.680 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:57.680 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:57.680 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:57.680 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:57.680 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:57.680 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key3 00:09:57.680 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.680 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.680 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.680 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:57.680 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:57.680 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:58.249 00:09:58.249 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:58.249 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:58.249 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:58.508 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:58.508 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:58.508 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.508 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.508 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.508 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:58.508 { 00:09:58.508 "cntlid": 23, 00:09:58.508 "qid": 0, 00:09:58.508 "state": "enabled", 00:09:58.508 "thread": "nvmf_tgt_poll_group_000", 00:09:58.508 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:09:58.508 "listen_address": { 00:09:58.508 "trtype": "TCP", 00:09:58.508 "adrfam": "IPv4", 00:09:58.508 "traddr": "10.0.0.3", 00:09:58.508 "trsvcid": "4420" 00:09:58.508 }, 00:09:58.508 "peer_address": { 00:09:58.508 "trtype": "TCP", 00:09:58.508 "adrfam": "IPv4", 00:09:58.508 "traddr": "10.0.0.1", 00:09:58.508 "trsvcid": "56726" 00:09:58.508 }, 00:09:58.508 "auth": { 00:09:58.508 "state": "completed", 00:09:58.508 "digest": "sha256", 00:09:58.508 "dhgroup": "ffdhe3072" 00:09:58.508 } 00:09:58.508 } 00:09:58.508 ]' 00:09:58.508 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:58.508 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:09:58.508 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:58.508 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:58.508 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:58.508 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:58.508 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:58.508 13:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:58.767 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZkZjI0Njc3NWQ4YTk5NDkyZGZlZDM1NjIzMDBkY2ZmMzQ2ZTQxZmIxYjQ3NzdiZjY3YTFhNjYxMGQ4YjQ0Y2MZpOQ=: 00:09:58.767 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:03:M2ZkZjI0Njc3NWQ4YTk5NDkyZGZlZDM1NjIzMDBkY2ZmMzQ2ZTQxZmIxYjQ3NzdiZjY3YTFhNjYxMGQ4YjQ0Y2MZpOQ=: 00:09:59.336 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:59.336 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:59.336 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:09:59.336 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.336 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.336 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.336 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:59.336 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:59.336 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:59.336 13:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:59.906 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:09:59.906 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:59.906 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:59.906 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:09:59.906 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:59.906 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:59.906 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:59.906 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.906 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.906 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.906 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:59.906 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:59.906 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:00.165 00:10:00.165 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:00.165 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:00.165 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:00.425 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:00.425 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:00.425 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.425 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.425 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.425 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:00.425 { 00:10:00.425 "cntlid": 25, 00:10:00.425 "qid": 0, 00:10:00.425 "state": "enabled", 00:10:00.425 "thread": "nvmf_tgt_poll_group_000", 00:10:00.425 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:10:00.425 "listen_address": { 00:10:00.425 "trtype": "TCP", 00:10:00.425 "adrfam": "IPv4", 00:10:00.425 "traddr": "10.0.0.3", 00:10:00.425 "trsvcid": "4420" 00:10:00.425 }, 00:10:00.425 "peer_address": { 00:10:00.425 "trtype": "TCP", 00:10:00.425 "adrfam": "IPv4", 00:10:00.425 "traddr": "10.0.0.1", 00:10:00.425 "trsvcid": "55978" 00:10:00.425 }, 00:10:00.425 "auth": { 00:10:00.425 "state": "completed", 00:10:00.425 "digest": "sha256", 00:10:00.425 "dhgroup": "ffdhe4096" 00:10:00.425 } 00:10:00.425 } 00:10:00.425 ]' 00:10:00.425 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:10:00.425 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:00.425 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:00.425 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:00.425 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:00.425 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:00.425 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:00.425 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:00.684 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2FmN2U1MTUzMjliYzBmZTU3M2ZjNjcwMDhjODg0OTdkMjI3NDkzZGE5NmFiN2EyQFmPpA==: --dhchap-ctrl-secret DHHC-1:03:Njg1NWJiYmZhMjkxMzRjNWM1NDE2OGNlYjc4YjgzMzRiMDVkZmJhMWQ0OGZlMjAyZjNhZDExNWY1YWQzNWE3Ntyf5n4=: 00:10:00.684 13:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:00:N2FmN2U1MTUzMjliYzBmZTU3M2ZjNjcwMDhjODg0OTdkMjI3NDkzZGE5NmFiN2EyQFmPpA==: --dhchap-ctrl-secret DHHC-1:03:Njg1NWJiYmZhMjkxMzRjNWM1NDE2OGNlYjc4YjgzMzRiMDVkZmJhMWQ0OGZlMjAyZjNhZDExNWY1YWQzNWE3Ntyf5n4=: 00:10:01.254 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:01.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:01.254 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:10:01.254 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.254 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.254 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.254 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:01.254 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:01.254 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:01.514 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:10:01.514 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:01.514 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:01.514 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:01.514 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:01.514 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:01.514 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:01.514 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.514 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.514 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.514 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:01.514 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:01.514 13:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:02.149 00:10:02.149 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:02.149 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:02.149 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:02.408 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:02.408 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:02.408 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.408 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.408 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.408 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:02.408 { 00:10:02.408 "cntlid": 27, 00:10:02.408 "qid": 0, 00:10:02.408 "state": "enabled", 00:10:02.408 "thread": "nvmf_tgt_poll_group_000", 00:10:02.408 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:10:02.408 "listen_address": { 00:10:02.408 "trtype": "TCP", 00:10:02.408 "adrfam": "IPv4", 00:10:02.408 "traddr": "10.0.0.3", 00:10:02.408 "trsvcid": "4420" 00:10:02.408 }, 00:10:02.408 "peer_address": { 00:10:02.408 "trtype": "TCP", 00:10:02.408 "adrfam": "IPv4", 00:10:02.408 "traddr": "10.0.0.1", 00:10:02.408 "trsvcid": "56012" 00:10:02.408 }, 00:10:02.408 "auth": { 00:10:02.408 "state": "completed", 
00:10:02.408 "digest": "sha256", 00:10:02.408 "dhgroup": "ffdhe4096" 00:10:02.408 } 00:10:02.408 } 00:10:02.408 ]' 00:10:02.408 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:02.408 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:02.408 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:02.408 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:02.408 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:02.408 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:02.408 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:02.408 13:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:02.667 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjQ2ZDlmZjJmZTk0NGE4YmY4YzZmOTMxYzlmMmI0Y2aDcmP9: --dhchap-ctrl-secret DHHC-1:02:MmQxNTRiNGRjZDkwOWI5YmQ4MmZjZjhiMWUwOTc2ZjE0ZmEwMzc0NTgyZGVkYzdiP7GVWQ==: 00:10:02.667 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:01:ZjQ2ZDlmZjJmZTk0NGE4YmY4YzZmOTMxYzlmMmI0Y2aDcmP9: --dhchap-ctrl-secret DHHC-1:02:MmQxNTRiNGRjZDkwOWI5YmQ4MmZjZjhiMWUwOTc2ZjE0ZmEwMzc0NTgyZGVkYzdiP7GVWQ==: 00:10:03.604 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:03.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:03.604 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:10:03.604 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.604 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.604 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.604 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:03.604 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:03.605 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:03.605 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:10:03.605 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:03.605 13:49:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:03.605 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:03.605 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:03.605 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:03.605 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:03.605 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.605 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.605 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.605 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:03.605 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:03.605 13:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:04.173 00:10:04.173 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:04.173 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:04.173 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:04.433 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:04.433 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:04.433 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.433 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.433 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.433 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:04.433 { 00:10:04.433 "cntlid": 29, 00:10:04.433 "qid": 0, 00:10:04.433 "state": "enabled", 00:10:04.433 "thread": "nvmf_tgt_poll_group_000", 00:10:04.433 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:10:04.433 "listen_address": { 00:10:04.433 "trtype": "TCP", 00:10:04.433 "adrfam": "IPv4", 00:10:04.433 "traddr": "10.0.0.3", 00:10:04.433 "trsvcid": "4420" 00:10:04.433 }, 00:10:04.433 "peer_address": { 00:10:04.433 "trtype": "TCP", 00:10:04.433 "adrfam": 
"IPv4", 00:10:04.433 "traddr": "10.0.0.1", 00:10:04.433 "trsvcid": "56048" 00:10:04.433 }, 00:10:04.433 "auth": { 00:10:04.433 "state": "completed", 00:10:04.433 "digest": "sha256", 00:10:04.433 "dhgroup": "ffdhe4096" 00:10:04.433 } 00:10:04.433 } 00:10:04.433 ]' 00:10:04.433 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:04.433 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:04.433 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:04.433 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:04.433 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:04.433 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:04.433 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:04.433 13:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:04.693 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY0MDZkOGNlMjc4MWQ4ZjYyNTAyOTNjNWUzNGExOTQ2NjI0OGE3ZjZlNGQxOWM2YULh6g==: --dhchap-ctrl-secret DHHC-1:01:NzY5OTJjM2UzNDQ0NjQ2NzEyYjk2ZGExZTlkZTZkM2NLb/z4: 00:10:04.693 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:02:ZDY0MDZkOGNlMjc4MWQ4ZjYyNTAyOTNjNWUzNGExOTQ2NjI0OGE3ZjZlNGQxOWM2YULh6g==: --dhchap-ctrl-secret DHHC-1:01:NzY5OTJjM2UzNDQ0NjQ2NzEyYjk2ZGExZTlkZTZkM2NLb/z4: 00:10:05.262 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:05.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:05.262 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:10:05.262 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.262 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.262 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.262 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:05.262 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:05.262 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:05.521 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:10:05.521 13:49:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:05.521 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:05.521 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:05.521 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:05.521 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:05.521 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key3 00:10:05.521 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.521 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.521 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.521 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:05.521 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:05.521 13:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:06.088 00:10:06.088 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:06.088 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:06.088 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:06.346 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:06.346 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:06.346 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.346 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.346 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.346 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:06.346 { 00:10:06.346 "cntlid": 31, 00:10:06.346 "qid": 0, 00:10:06.346 "state": "enabled", 00:10:06.346 "thread": "nvmf_tgt_poll_group_000", 00:10:06.346 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:10:06.346 "listen_address": { 00:10:06.346 "trtype": "TCP", 00:10:06.346 "adrfam": "IPv4", 00:10:06.346 "traddr": "10.0.0.3", 00:10:06.346 "trsvcid": "4420" 00:10:06.346 }, 00:10:06.346 "peer_address": { 00:10:06.346 "trtype": "TCP", 
00:10:06.346 "adrfam": "IPv4", 00:10:06.346 "traddr": "10.0.0.1", 00:10:06.346 "trsvcid": "56070" 00:10:06.346 }, 00:10:06.346 "auth": { 00:10:06.346 "state": "completed", 00:10:06.346 "digest": "sha256", 00:10:06.346 "dhgroup": "ffdhe4096" 00:10:06.346 } 00:10:06.346 } 00:10:06.346 ]' 00:10:06.346 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:06.346 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:06.346 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:06.346 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:06.346 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:06.346 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:06.346 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:06.346 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:06.605 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZkZjI0Njc3NWQ4YTk5NDkyZGZlZDM1NjIzMDBkY2ZmMzQ2ZTQxZmIxYjQ3NzdiZjY3YTFhNjYxMGQ4YjQ0Y2MZpOQ=: 00:10:06.605 13:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:03:M2ZkZjI0Njc3NWQ4YTk5NDkyZGZlZDM1NjIzMDBkY2ZmMzQ2ZTQxZmIxYjQ3NzdiZjY3YTFhNjYxMGQ4YjQ0Y2MZpOQ=: 00:10:07.541 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:07.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:07.541 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:10:07.541 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.541 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.541 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.541 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:07.541 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:07.541 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:07.541 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:07.541 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:10:07.541 
13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:07.541 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:07.541 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:07.541 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:07.542 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:07.542 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:07.542 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.542 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.542 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.542 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:07.542 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:07.542 13:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:08.108 00:10:08.108 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:08.108 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:08.108 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:08.366 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:08.366 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:08.366 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.366 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.366 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.366 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:08.366 { 00:10:08.366 "cntlid": 33, 00:10:08.366 "qid": 0, 00:10:08.366 "state": "enabled", 00:10:08.366 "thread": "nvmf_tgt_poll_group_000", 00:10:08.366 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:10:08.366 "listen_address": { 00:10:08.366 "trtype": "TCP", 00:10:08.366 "adrfam": "IPv4", 00:10:08.366 "traddr": 
"10.0.0.3", 00:10:08.366 "trsvcid": "4420" 00:10:08.366 }, 00:10:08.366 "peer_address": { 00:10:08.366 "trtype": "TCP", 00:10:08.366 "adrfam": "IPv4", 00:10:08.366 "traddr": "10.0.0.1", 00:10:08.366 "trsvcid": "56088" 00:10:08.366 }, 00:10:08.366 "auth": { 00:10:08.366 "state": "completed", 00:10:08.366 "digest": "sha256", 00:10:08.366 "dhgroup": "ffdhe6144" 00:10:08.366 } 00:10:08.366 } 00:10:08.366 ]' 00:10:08.366 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:08.366 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:08.366 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:08.366 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:08.366 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:08.366 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:08.366 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:08.366 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:08.625 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2FmN2U1MTUzMjliYzBmZTU3M2ZjNjcwMDhjODg0OTdkMjI3NDkzZGE5NmFiN2EyQFmPpA==: --dhchap-ctrl-secret DHHC-1:03:Njg1NWJiYmZhMjkxMzRjNWM1NDE2OGNlYjc4YjgzMzRiMDVkZmJhMWQ0OGZlMjAyZjNhZDExNWY1YWQzNWE3Ntyf5n4=: 00:10:08.625 13:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:00:N2FmN2U1MTUzMjliYzBmZTU3M2ZjNjcwMDhjODg0OTdkMjI3NDkzZGE5NmFiN2EyQFmPpA==: --dhchap-ctrl-secret DHHC-1:03:Njg1NWJiYmZhMjkxMzRjNWM1NDE2OGNlYjc4YjgzMzRiMDVkZmJhMWQ0OGZlMjAyZjNhZDExNWY1YWQzNWE3Ntyf5n4=: 00:10:09.192 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:09.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:09.192 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:10:09.192 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.192 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.192 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.192 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:09.192 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:09.192 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:09.450 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:10:09.450 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:09.450 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:09.450 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:09.450 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:09.450 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:09.451 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:09.451 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.451 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.451 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.451 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:09.451 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:09.451 13:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:10.020 00:10:10.020 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:10.020 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:10.020 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:10.279 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:10.279 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:10.279 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.279 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.279 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.279 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:10.279 { 00:10:10.280 "cntlid": 35, 00:10:10.280 "qid": 0, 00:10:10.280 "state": "enabled", 00:10:10.280 "thread": "nvmf_tgt_poll_group_000", 
00:10:10.280 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:10:10.280 "listen_address": { 00:10:10.280 "trtype": "TCP", 00:10:10.280 "adrfam": "IPv4", 00:10:10.280 "traddr": "10.0.0.3", 00:10:10.280 "trsvcid": "4420" 00:10:10.280 }, 00:10:10.280 "peer_address": { 00:10:10.280 "trtype": "TCP", 00:10:10.280 "adrfam": "IPv4", 00:10:10.280 "traddr": "10.0.0.1", 00:10:10.280 "trsvcid": "59072" 00:10:10.280 }, 00:10:10.280 "auth": { 00:10:10.280 "state": "completed", 00:10:10.280 "digest": "sha256", 00:10:10.280 "dhgroup": "ffdhe6144" 00:10:10.280 } 00:10:10.280 } 00:10:10.280 ]' 00:10:10.280 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:10.280 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:10.280 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:10.539 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:10.539 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:10.539 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:10.539 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:10.539 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:10.797 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjQ2ZDlmZjJmZTk0NGE4YmY4YzZmOTMxYzlmMmI0Y2aDcmP9: --dhchap-ctrl-secret DHHC-1:02:MmQxNTRiNGRjZDkwOWI5YmQ4MmZjZjhiMWUwOTc2ZjE0ZmEwMzc0NTgyZGVkYzdiP7GVWQ==: 00:10:10.797 13:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:01:ZjQ2ZDlmZjJmZTk0NGE4YmY4YzZmOTMxYzlmMmI0Y2aDcmP9: --dhchap-ctrl-secret DHHC-1:02:MmQxNTRiNGRjZDkwOWI5YmQ4MmZjZjhiMWUwOTc2ZjE0ZmEwMzc0NTgyZGVkYzdiP7GVWQ==: 00:10:11.365 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:11.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:11.365 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:10:11.365 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.365 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.365 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.365 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:11.365 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:11.365 13:49:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:11.625 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:10:11.625 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:11.625 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:11.625 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:11.625 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:11.625 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:11.625 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:11.625 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.625 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.625 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.625 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:11.625 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:11.625 13:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:12.193 00:10:12.193 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:12.193 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:12.193 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:12.452 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:12.452 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:12.452 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.452 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.452 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.452 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:12.452 { 
00:10:12.452 "cntlid": 37, 00:10:12.452 "qid": 0, 00:10:12.452 "state": "enabled", 00:10:12.452 "thread": "nvmf_tgt_poll_group_000", 00:10:12.452 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:10:12.452 "listen_address": { 00:10:12.452 "trtype": "TCP", 00:10:12.452 "adrfam": "IPv4", 00:10:12.452 "traddr": "10.0.0.3", 00:10:12.452 "trsvcid": "4420" 00:10:12.452 }, 00:10:12.452 "peer_address": { 00:10:12.452 "trtype": "TCP", 00:10:12.452 "adrfam": "IPv4", 00:10:12.452 "traddr": "10.0.0.1", 00:10:12.452 "trsvcid": "59096" 00:10:12.452 }, 00:10:12.452 "auth": { 00:10:12.452 "state": "completed", 00:10:12.452 "digest": "sha256", 00:10:12.452 "dhgroup": "ffdhe6144" 00:10:12.452 } 00:10:12.452 } 00:10:12.452 ]' 00:10:12.452 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:12.452 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:12.452 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:12.452 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:12.452 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:12.452 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:12.452 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:12.452 13:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:12.712 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY0MDZkOGNlMjc4MWQ4ZjYyNTAyOTNjNWUzNGExOTQ2NjI0OGE3ZjZlNGQxOWM2YULh6g==: --dhchap-ctrl-secret DHHC-1:01:NzY5OTJjM2UzNDQ0NjQ2NzEyYjk2ZGExZTlkZTZkM2NLb/z4: 00:10:12.712 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:02:ZDY0MDZkOGNlMjc4MWQ4ZjYyNTAyOTNjNWUzNGExOTQ2NjI0OGE3ZjZlNGQxOWM2YULh6g==: --dhchap-ctrl-secret DHHC-1:01:NzY5OTJjM2UzNDQ0NjQ2NzEyYjk2ZGExZTlkZTZkM2NLb/z4: 00:10:13.281 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:13.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:13.281 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:10:13.281 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.281 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.281 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.281 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:13.281 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:13.281 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:13.541 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:10:13.541 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:13.541 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:13.541 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:13.541 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:13.541 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:13.541 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key3 00:10:13.541 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.541 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.541 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.541 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:13.541 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:13.541 13:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:14.108 00:10:14.108 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:14.108 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:14.108 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:14.367 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:14.367 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:14.367 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.367 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.367 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.367 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:10:14.367 { 00:10:14.367 "cntlid": 39, 00:10:14.367 "qid": 0, 00:10:14.367 "state": "enabled", 00:10:14.367 "thread": "nvmf_tgt_poll_group_000", 00:10:14.367 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:10:14.367 "listen_address": { 00:10:14.367 "trtype": "TCP", 00:10:14.367 "adrfam": "IPv4", 00:10:14.367 "traddr": "10.0.0.3", 00:10:14.367 "trsvcid": "4420" 00:10:14.367 }, 00:10:14.367 "peer_address": { 00:10:14.367 "trtype": "TCP", 00:10:14.367 "adrfam": "IPv4", 00:10:14.367 "traddr": "10.0.0.1", 00:10:14.367 "trsvcid": "59124" 00:10:14.367 }, 00:10:14.367 "auth": { 00:10:14.367 "state": "completed", 00:10:14.367 "digest": "sha256", 00:10:14.367 "dhgroup": "ffdhe6144" 00:10:14.367 } 00:10:14.367 } 00:10:14.367 ]' 00:10:14.367 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:14.367 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:14.367 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:14.368 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:14.368 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:14.626 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:14.626 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:14.626 13:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:14.885 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZkZjI0Njc3NWQ4YTk5NDkyZGZlZDM1NjIzMDBkY2ZmMzQ2ZTQxZmIxYjQ3NzdiZjY3YTFhNjYxMGQ4YjQ0Y2MZpOQ=: 00:10:14.885 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:03:M2ZkZjI0Njc3NWQ4YTk5NDkyZGZlZDM1NjIzMDBkY2ZmMzQ2ZTQxZmIxYjQ3NzdiZjY3YTFhNjYxMGQ4YjQ0Y2MZpOQ=: 00:10:15.454 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:15.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:15.454 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:10:15.454 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.454 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.454 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.454 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:15.454 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:15.454 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:15.454 13:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:15.713 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:10:15.713 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:15.713 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:15.713 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:15.713 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:15.713 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:15.713 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:15.713 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.713 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.713 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.713 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:15.713 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:15.713 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:16.318 00:10:16.590 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:16.590 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:16.590 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:16.590 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:16.590 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:16.590 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.590 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.590 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:10:16.590 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:16.590 { 00:10:16.590 "cntlid": 41, 00:10:16.590 "qid": 0, 00:10:16.590 "state": "enabled", 00:10:16.590 "thread": "nvmf_tgt_poll_group_000", 00:10:16.590 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:10:16.590 "listen_address": { 00:10:16.590 "trtype": "TCP", 00:10:16.590 "adrfam": "IPv4", 00:10:16.590 "traddr": "10.0.0.3", 00:10:16.590 "trsvcid": "4420" 00:10:16.590 }, 00:10:16.590 "peer_address": { 00:10:16.590 "trtype": "TCP", 00:10:16.590 "adrfam": "IPv4", 00:10:16.590 "traddr": "10.0.0.1", 00:10:16.590 "trsvcid": "59154" 00:10:16.590 }, 00:10:16.590 "auth": { 00:10:16.590 "state": "completed", 00:10:16.590 "digest": "sha256", 00:10:16.590 "dhgroup": "ffdhe8192" 00:10:16.590 } 00:10:16.590 } 00:10:16.590 ]' 00:10:16.590 13:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:16.848 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:16.848 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:16.849 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:16.849 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:16.849 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:16.849 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:16.849 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:17.108 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2FmN2U1MTUzMjliYzBmZTU3M2ZjNjcwMDhjODg0OTdkMjI3NDkzZGE5NmFiN2EyQFmPpA==: --dhchap-ctrl-secret DHHC-1:03:Njg1NWJiYmZhMjkxMzRjNWM1NDE2OGNlYjc4YjgzMzRiMDVkZmJhMWQ0OGZlMjAyZjNhZDExNWY1YWQzNWE3Ntyf5n4=: 00:10:17.108 13:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:00:N2FmN2U1MTUzMjliYzBmZTU3M2ZjNjcwMDhjODg0OTdkMjI3NDkzZGE5NmFiN2EyQFmPpA==: --dhchap-ctrl-secret DHHC-1:03:Njg1NWJiYmZhMjkxMzRjNWM1NDE2OGNlYjc4YjgzMzRiMDVkZmJhMWQ0OGZlMjAyZjNhZDExNWY1YWQzNWE3Ntyf5n4=: 00:10:18.045 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:18.045 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:18.045 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:10:18.045 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.045 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.045 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
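The pass traced above (connect_authenticate sha256 ffdhe8192 with key0) condenses to the command sequence below. This is a sketch reconstructed from the trace, not part of the run: rpc_cmd and the host-socket rpc.py calls are the harness wrappers visible above, key0/ckey0 name keys registered earlier in auth.sh, and the literal DHHC-1 secrets are elided.

# host side: restrict DH-HMAC-CHAP negotiation to the digest/dhgroup of this pass
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

# target side: allow the host NQN on the subsystem with the key pair under test
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# host side: attach a controller over TCP, authenticating with the same keys
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# verify the qpair finished authentication with the expected parameters
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'

# tear down, repeat the handshake with the kernel initiator, then clean up
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 \
    --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 \
    --dhchap-secret DHHC-1:00:... --dhchap-ctrl-secret DHHC-1:03:...
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4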
00:10:18.045 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:18.045 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:18.045 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:18.045 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:10:18.045 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:18.045 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:18.045 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:18.045 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:18.045 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:18.045 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:18.045 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.045 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.045 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.045 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:18.045 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:18.045 13:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:18.613 00:10:18.873 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:18.873 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:18.873 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:19.133 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:19.133 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:19.133 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.133 13:49:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.133 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.133 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:19.133 { 00:10:19.133 "cntlid": 43, 00:10:19.133 "qid": 0, 00:10:19.133 "state": "enabled", 00:10:19.133 "thread": "nvmf_tgt_poll_group_000", 00:10:19.133 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:10:19.133 "listen_address": { 00:10:19.133 "trtype": "TCP", 00:10:19.133 "adrfam": "IPv4", 00:10:19.133 "traddr": "10.0.0.3", 00:10:19.133 "trsvcid": "4420" 00:10:19.133 }, 00:10:19.133 "peer_address": { 00:10:19.133 "trtype": "TCP", 00:10:19.133 "adrfam": "IPv4", 00:10:19.133 "traddr": "10.0.0.1", 00:10:19.133 "trsvcid": "59182" 00:10:19.133 }, 00:10:19.133 "auth": { 00:10:19.133 "state": "completed", 00:10:19.133 "digest": "sha256", 00:10:19.133 "dhgroup": "ffdhe8192" 00:10:19.133 } 00:10:19.133 } 00:10:19.133 ]' 00:10:19.133 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:19.133 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:19.133 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:19.133 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:19.133 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:19.133 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:19.133 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:19.133 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:19.701 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjQ2ZDlmZjJmZTk0NGE4YmY4YzZmOTMxYzlmMmI0Y2aDcmP9: --dhchap-ctrl-secret DHHC-1:02:MmQxNTRiNGRjZDkwOWI5YmQ4MmZjZjhiMWUwOTc2ZjE0ZmEwMzc0NTgyZGVkYzdiP7GVWQ==: 00:10:19.701 13:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:01:ZjQ2ZDlmZjJmZTk0NGE4YmY4YzZmOTMxYzlmMmI0Y2aDcmP9: --dhchap-ctrl-secret DHHC-1:02:MmQxNTRiNGRjZDkwOWI5YmQ4MmZjZjhiMWUwOTc2ZjE0ZmEwMzc0NTgyZGVkYzdiP7GVWQ==: 00:10:20.269 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:20.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:20.269 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:10:20.269 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.269 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
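Each connect_authenticate round traced here boils down to three RPCs. A condensed sketch of the key1 round just completed (sha256/ffdhe8192): "key1"/"ckey1" are names of keys registered earlier in the script (that step is not part of this excerpt), the target socket /var/tmp/spdk.sock is an assumption, and /var/tmp/host.sock is the one shown in the trace.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Target side: authorize the host NQN and bind its DH-HMAC-CHAP key pair.
  "$rpc" -s /var/tmp/spdk.sock nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Host side (SPDK initiator): pin the negotiable digest/dhgroup, then attach with the same keys.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1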
00:10:20.269 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.269 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:20.269 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:20.269 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:20.529 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:10:20.529 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:20.529 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:20.529 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:20.529 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:20.529 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:20.529 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:20.529 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.529 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.529 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.529 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:20.529 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:20.529 13:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:21.097 00:10:21.097 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:21.097 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:21.097 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:21.357 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:21.357 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:21.357 13:49:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.357 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.357 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.357 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:21.357 { 00:10:21.357 "cntlid": 45, 00:10:21.357 "qid": 0, 00:10:21.357 "state": "enabled", 00:10:21.357 "thread": "nvmf_tgt_poll_group_000", 00:10:21.357 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:10:21.357 "listen_address": { 00:10:21.357 "trtype": "TCP", 00:10:21.357 "adrfam": "IPv4", 00:10:21.357 "traddr": "10.0.0.3", 00:10:21.357 "trsvcid": "4420" 00:10:21.357 }, 00:10:21.357 "peer_address": { 00:10:21.357 "trtype": "TCP", 00:10:21.357 "adrfam": "IPv4", 00:10:21.357 "traddr": "10.0.0.1", 00:10:21.357 "trsvcid": "38262" 00:10:21.357 }, 00:10:21.357 "auth": { 00:10:21.357 "state": "completed", 00:10:21.357 "digest": "sha256", 00:10:21.357 "dhgroup": "ffdhe8192" 00:10:21.357 } 00:10:21.357 } 00:10:21.357 ]' 00:10:21.357 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:21.616 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:21.616 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:21.616 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:21.616 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:21.616 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:21.616 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:21.616 13:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:21.876 13:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY0MDZkOGNlMjc4MWQ4ZjYyNTAyOTNjNWUzNGExOTQ2NjI0OGE3ZjZlNGQxOWM2YULh6g==: --dhchap-ctrl-secret DHHC-1:01:NzY5OTJjM2UzNDQ0NjQ2NzEyYjk2ZGExZTlkZTZkM2NLb/z4: 00:10:21.876 13:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:02:ZDY0MDZkOGNlMjc4MWQ4ZjYyNTAyOTNjNWUzNGExOTQ2NjI0OGE3ZjZlNGQxOWM2YULh6g==: --dhchap-ctrl-secret DHHC-1:01:NzY5OTJjM2UzNDQ0NjQ2NzEyYjk2ZGExZTlkZTZkM2NLb/z4: 00:10:22.445 13:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:22.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:22.445 13:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:10:22.445 13:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
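After the SPDK-initiator controller is detached, the trace repeats the authentication from the kernel host with nvme-cli. A sketch of that step, with the long DHHC-1 secrets shortened to placeholders (the full strings are the ones shown verbatim in the trace above):

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4
  hostid=cfa2def7-c8af-457f-82a0-b312efdea7f4
  subnqn=nqn.2024-03.io.spdk:cnode0
  key='DHHC-1:02:...'       # host secret (placeholder for the string in the trace)
  ctrl_key='DHHC-1:01:...'  # controller secret, making the authentication bidirectional

  nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" -l 0 \
      --dhchap-secret "$key" --dhchap-ctrl-secret "$ctrl_key"
  nvme disconnect -n "$subnqn"   # prints "NQN:... disconnected 1 controller(s)" on success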
00:10:22.445 13:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.705 13:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.705 13:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:22.705 13:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:22.705 13:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:22.965 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:10:22.965 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:22.965 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:22.965 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:22.965 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:22.965 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:22.965 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key3 00:10:22.965 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.965 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.965 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.965 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:22.965 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:22.965 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:23.533 00:10:23.533 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:23.533 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:23.533 13:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:23.791 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:23.791 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:23.791 
13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.791 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.791 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.791 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:23.791 { 00:10:23.791 "cntlid": 47, 00:10:23.791 "qid": 0, 00:10:23.791 "state": "enabled", 00:10:23.791 "thread": "nvmf_tgt_poll_group_000", 00:10:23.791 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:10:23.791 "listen_address": { 00:10:23.791 "trtype": "TCP", 00:10:23.791 "adrfam": "IPv4", 00:10:23.791 "traddr": "10.0.0.3", 00:10:23.791 "trsvcid": "4420" 00:10:23.791 }, 00:10:23.791 "peer_address": { 00:10:23.791 "trtype": "TCP", 00:10:23.791 "adrfam": "IPv4", 00:10:23.791 "traddr": "10.0.0.1", 00:10:23.791 "trsvcid": "38272" 00:10:23.791 }, 00:10:23.791 "auth": { 00:10:23.791 "state": "completed", 00:10:23.791 "digest": "sha256", 00:10:23.791 "dhgroup": "ffdhe8192" 00:10:23.791 } 00:10:23.791 } 00:10:23.791 ]' 00:10:23.791 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:23.791 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:23.791 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:24.049 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:24.049 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:24.050 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:24.050 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:24.050 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:24.309 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZkZjI0Njc3NWQ4YTk5NDkyZGZlZDM1NjIzMDBkY2ZmMzQ2ZTQxZmIxYjQ3NzdiZjY3YTFhNjYxMGQ4YjQ0Y2MZpOQ=: 00:10:24.309 13:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:03:M2ZkZjI0Njc3NWQ4YTk5NDkyZGZlZDM1NjIzMDBkY2ZmMzQ2ZTQxZmIxYjQ3NzdiZjY3YTFhNjYxMGQ4YjQ0Y2MZpOQ=: 00:10:24.877 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:24.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:24.877 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:10:24.877 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.877 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
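The sha256/ffdhe8192 group ends here and the trace switches to sha384 with the null DH group. The auth.sh line numbers echoed in the trace (@118 through @123) imply a driver loop of roughly the following shape; the array contents below are limited to values seen in this excerpt, and the key names are stand-ins for the secrets the script generates earlier:

  # hostrpc is the helper seen throughout the trace: host-side rpc.py on /var/tmp/host.sock.
  hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }

  digests=(sha256 sha384)              # only the digests exercised so far in this excerpt
  dhgroups=(null ffdhe2048 ffdhe8192)  # groups visible here; the real list is likely longer
  keys=(key0 key1 key2 key3)           # stand-ins; the real array holds generated DHHC-1 secrets

  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        connect_authenticate "$digest" "$dhgroup" "$keyid"   # the function traced at auth.sh@65-@78
      done
    done
  done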
00:10:24.877 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.877 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:10:24.877 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:24.877 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:24.877 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:24.877 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:25.135 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:10:25.135 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:25.135 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:25.135 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:25.135 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:25.135 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:25.135 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:25.135 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.135 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.135 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.135 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:25.135 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:25.135 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:25.393 00:10:25.393 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:25.393 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:25.393 13:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:25.959 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:25.959 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:25.959 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.959 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.959 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.959 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:25.959 { 00:10:25.959 "cntlid": 49, 00:10:25.959 "qid": 0, 00:10:25.959 "state": "enabled", 00:10:25.959 "thread": "nvmf_tgt_poll_group_000", 00:10:25.959 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:10:25.959 "listen_address": { 00:10:25.959 "trtype": "TCP", 00:10:25.959 "adrfam": "IPv4", 00:10:25.959 "traddr": "10.0.0.3", 00:10:25.959 "trsvcid": "4420" 00:10:25.959 }, 00:10:25.959 "peer_address": { 00:10:25.959 "trtype": "TCP", 00:10:25.959 "adrfam": "IPv4", 00:10:25.959 "traddr": "10.0.0.1", 00:10:25.959 "trsvcid": "38302" 00:10:25.959 }, 00:10:25.959 "auth": { 00:10:25.959 "state": "completed", 00:10:25.959 "digest": "sha384", 00:10:25.959 "dhgroup": "null" 00:10:25.959 } 00:10:25.959 } 00:10:25.959 ]' 00:10:25.959 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:25.959 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:25.959 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:25.959 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:25.959 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:25.959 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:25.959 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:25.959 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:26.217 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2FmN2U1MTUzMjliYzBmZTU3M2ZjNjcwMDhjODg0OTdkMjI3NDkzZGE5NmFiN2EyQFmPpA==: --dhchap-ctrl-secret DHHC-1:03:Njg1NWJiYmZhMjkxMzRjNWM1NDE2OGNlYjc4YjgzMzRiMDVkZmJhMWQ0OGZlMjAyZjNhZDExNWY1YWQzNWE3Ntyf5n4=: 00:10:26.217 13:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:00:N2FmN2U1MTUzMjliYzBmZTU3M2ZjNjcwMDhjODg0OTdkMjI3NDkzZGE5NmFiN2EyQFmPpA==: --dhchap-ctrl-secret DHHC-1:03:Njg1NWJiYmZhMjkxMzRjNWM1NDE2OGNlYjc4YjgzMzRiMDVkZmJhMWQ0OGZlMjAyZjNhZDExNWY1YWQzNWE3Ntyf5n4=: 00:10:26.784 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:26.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:26.785 13:49:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:10:26.785 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.785 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.785 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.785 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:26.785 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:26.785 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:27.043 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:10:27.043 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:27.043 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:27.043 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:27.043 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:27.043 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:27.043 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:27.043 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.043 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.043 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.043 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:27.043 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:27.043 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:27.301 00:10:27.301 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:27.301 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:27.301 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:27.868 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:27.868 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:27.868 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.868 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.868 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.868 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:27.868 { 00:10:27.868 "cntlid": 51, 00:10:27.868 "qid": 0, 00:10:27.868 "state": "enabled", 00:10:27.868 "thread": "nvmf_tgt_poll_group_000", 00:10:27.868 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:10:27.868 "listen_address": { 00:10:27.868 "trtype": "TCP", 00:10:27.868 "adrfam": "IPv4", 00:10:27.868 "traddr": "10.0.0.3", 00:10:27.868 "trsvcid": "4420" 00:10:27.868 }, 00:10:27.868 "peer_address": { 00:10:27.868 "trtype": "TCP", 00:10:27.868 "adrfam": "IPv4", 00:10:27.868 "traddr": "10.0.0.1", 00:10:27.868 "trsvcid": "38326" 00:10:27.868 }, 00:10:27.868 "auth": { 00:10:27.868 "state": "completed", 00:10:27.868 "digest": "sha384", 00:10:27.868 "dhgroup": "null" 00:10:27.868 } 00:10:27.868 } 00:10:27.868 ]' 00:10:27.868 13:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:27.868 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:27.868 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:27.868 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:27.868 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:27.868 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:27.868 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:27.868 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:28.125 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjQ2ZDlmZjJmZTk0NGE4YmY4YzZmOTMxYzlmMmI0Y2aDcmP9: --dhchap-ctrl-secret DHHC-1:02:MmQxNTRiNGRjZDkwOWI5YmQ4MmZjZjhiMWUwOTc2ZjE0ZmEwMzc0NTgyZGVkYzdiP7GVWQ==: 00:10:28.125 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:01:ZjQ2ZDlmZjJmZTk0NGE4YmY4YzZmOTMxYzlmMmI0Y2aDcmP9: --dhchap-ctrl-secret DHHC-1:02:MmQxNTRiNGRjZDkwOWI5YmQ4MmZjZjhiMWUwOTc2ZjE0ZmEwMzc0NTgyZGVkYzdiP7GVWQ==: 00:10:28.691 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:28.691 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:28.691 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:10:28.691 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.691 13:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.691 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.691 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:28.691 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:28.691 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:28.950 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:10:28.950 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:28.950 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:28.950 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:28.950 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:28.950 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:28.950 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:28.950 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.950 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.950 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.950 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:28.950 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:28.950 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:29.209 00:10:29.209 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:29.209 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:10:29.209 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:29.468 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:29.468 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:29.468 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.468 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.468 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.468 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:29.468 { 00:10:29.468 "cntlid": 53, 00:10:29.468 "qid": 0, 00:10:29.468 "state": "enabled", 00:10:29.468 "thread": "nvmf_tgt_poll_group_000", 00:10:29.468 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:10:29.468 "listen_address": { 00:10:29.468 "trtype": "TCP", 00:10:29.468 "adrfam": "IPv4", 00:10:29.468 "traddr": "10.0.0.3", 00:10:29.468 "trsvcid": "4420" 00:10:29.468 }, 00:10:29.468 "peer_address": { 00:10:29.468 "trtype": "TCP", 00:10:29.468 "adrfam": "IPv4", 00:10:29.468 "traddr": "10.0.0.1", 00:10:29.468 "trsvcid": "38340" 00:10:29.468 }, 00:10:29.468 "auth": { 00:10:29.468 "state": "completed", 00:10:29.468 "digest": "sha384", 00:10:29.468 "dhgroup": "null" 00:10:29.468 } 00:10:29.468 } 00:10:29.468 ]' 00:10:29.468 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:29.726 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:29.726 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:29.726 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:29.726 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:29.726 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:29.726 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:29.726 13:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:29.984 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY0MDZkOGNlMjc4MWQ4ZjYyNTAyOTNjNWUzNGExOTQ2NjI0OGE3ZjZlNGQxOWM2YULh6g==: --dhchap-ctrl-secret DHHC-1:01:NzY5OTJjM2UzNDQ0NjQ2NzEyYjk2ZGExZTlkZTZkM2NLb/z4: 00:10:29.984 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:02:ZDY0MDZkOGNlMjc4MWQ4ZjYyNTAyOTNjNWUzNGExOTQ2NjI0OGE3ZjZlNGQxOWM2YULh6g==: --dhchap-ctrl-secret DHHC-1:01:NzY5OTJjM2UzNDQ0NjQ2NzEyYjk2ZGExZTlkZTZkM2NLb/z4: 00:10:30.550 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:30.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:30.550 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:10:30.550 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.550 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.808 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.809 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:30.809 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:30.809 13:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:30.809 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:10:30.809 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:30.809 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:30.809 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:30.809 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:30.809 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:30.809 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key3 00:10:30.809 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.809 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.809 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.809 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:30.809 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:30.809 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:31.377 00:10:31.377 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:31.377 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
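Note that the key3 round just above passes only --dhchap-key, with no --dhchap-ctrlr-key: the @68 line echoed in the trace builds the controller-key arguments conditionally, so rounds whose ckeys entry is empty run unidirectional authentication. A tiny illustration of that expansion, with hypothetical array contents:

  ckeys=(ckey0 ckey1 ckey2 "")    # hypothetical: key3 has no matching controller key
  keyid=3
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  echo "extra args for key$keyid: ${ckey[*]:-<none>}"   # prints "<none>" for key3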
00:10:31.377 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:31.646 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:31.646 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:31.646 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.646 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.646 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.646 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:31.646 { 00:10:31.646 "cntlid": 55, 00:10:31.646 "qid": 0, 00:10:31.646 "state": "enabled", 00:10:31.646 "thread": "nvmf_tgt_poll_group_000", 00:10:31.646 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:10:31.646 "listen_address": { 00:10:31.646 "trtype": "TCP", 00:10:31.646 "adrfam": "IPv4", 00:10:31.646 "traddr": "10.0.0.3", 00:10:31.646 "trsvcid": "4420" 00:10:31.646 }, 00:10:31.646 "peer_address": { 00:10:31.646 "trtype": "TCP", 00:10:31.646 "adrfam": "IPv4", 00:10:31.646 "traddr": "10.0.0.1", 00:10:31.646 "trsvcid": "60908" 00:10:31.646 }, 00:10:31.646 "auth": { 00:10:31.646 "state": "completed", 00:10:31.646 "digest": "sha384", 00:10:31.646 "dhgroup": "null" 00:10:31.646 } 00:10:31.646 } 00:10:31.646 ]' 00:10:31.646 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:31.646 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:31.646 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:31.646 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:31.646 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:31.646 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:31.646 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:31.646 13:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:31.905 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZkZjI0Njc3NWQ4YTk5NDkyZGZlZDM1NjIzMDBkY2ZmMzQ2ZTQxZmIxYjQ3NzdiZjY3YTFhNjYxMGQ4YjQ0Y2MZpOQ=: 00:10:31.905 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:03:M2ZkZjI0Njc3NWQ4YTk5NDkyZGZlZDM1NjIzMDBkY2ZmMzQ2ZTQxZmIxYjQ3NzdiZjY3YTFhNjYxMGQ4YjQ0Y2MZpOQ=: 00:10:32.844 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:32.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
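The disconnect above and the nvmf_subsystem_remove_host call that follows it form the per-round cleanup, together with the bdev_nvme_detach_controller issued a few entries earlier. Condensed, with the target RPC socket again assumed to be the SPDK default:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4
  subnqn=nqn.2024-03.io.spdk:cnode0

  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0                 # drop the SPDK-initiator controller
  nvme disconnect -n "$subnqn"                                                   # drop the kernel-host connection
  "$rpc" -s /var/tmp/spdk.sock nvmf_subsystem_remove_host "$subnqn" "$hostnqn"   # de-authorize the host NQN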
00:10:32.844 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:10:32.844 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.844 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.844 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.844 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:32.844 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:32.844 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:32.844 13:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:32.844 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:10:32.844 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:32.844 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:32.844 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:32.844 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:32.844 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:32.844 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:32.844 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.844 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.844 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.844 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:32.844 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:32.844 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:33.412 00:10:33.412 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:33.412 
13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:33.412 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:33.672 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:33.672 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:33.672 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.672 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.672 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.672 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:33.672 { 00:10:33.672 "cntlid": 57, 00:10:33.672 "qid": 0, 00:10:33.672 "state": "enabled", 00:10:33.672 "thread": "nvmf_tgt_poll_group_000", 00:10:33.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:10:33.672 "listen_address": { 00:10:33.672 "trtype": "TCP", 00:10:33.672 "adrfam": "IPv4", 00:10:33.672 "traddr": "10.0.0.3", 00:10:33.672 "trsvcid": "4420" 00:10:33.672 }, 00:10:33.672 "peer_address": { 00:10:33.672 "trtype": "TCP", 00:10:33.672 "adrfam": "IPv4", 00:10:33.672 "traddr": "10.0.0.1", 00:10:33.672 "trsvcid": "60936" 00:10:33.672 }, 00:10:33.672 "auth": { 00:10:33.672 "state": "completed", 00:10:33.672 "digest": "sha384", 00:10:33.672 "dhgroup": "ffdhe2048" 00:10:33.672 } 00:10:33.672 } 00:10:33.672 ]' 00:10:33.672 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:33.672 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:33.672 13:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:33.672 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:33.672 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:33.931 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:33.931 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:33.931 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:33.931 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2FmN2U1MTUzMjliYzBmZTU3M2ZjNjcwMDhjODg0OTdkMjI3NDkzZGE5NmFiN2EyQFmPpA==: --dhchap-ctrl-secret DHHC-1:03:Njg1NWJiYmZhMjkxMzRjNWM1NDE2OGNlYjc4YjgzMzRiMDVkZmJhMWQ0OGZlMjAyZjNhZDExNWY1YWQzNWE3Ntyf5n4=: 00:10:33.931 13:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:00:N2FmN2U1MTUzMjliYzBmZTU3M2ZjNjcwMDhjODg0OTdkMjI3NDkzZGE5NmFiN2EyQFmPpA==: 
--dhchap-ctrl-secret DHHC-1:03:Njg1NWJiYmZhMjkxMzRjNWM1NDE2OGNlYjc4YjgzMzRiMDVkZmJhMWQ0OGZlMjAyZjNhZDExNWY1YWQzNWE3Ntyf5n4=: 00:10:34.869 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:34.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:34.869 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:10:34.869 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.869 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.869 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.869 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:34.869 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:34.869 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:35.128 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:10:35.129 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:35.129 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:35.129 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:35.129 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:35.129 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:35.129 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:35.129 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.129 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.129 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.129 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:35.129 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:35.129 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:35.389 00:10:35.389 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:35.389 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:35.389 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:35.649 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:35.649 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:35.649 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.649 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.649 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.649 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:35.649 { 00:10:35.649 "cntlid": 59, 00:10:35.649 "qid": 0, 00:10:35.649 "state": "enabled", 00:10:35.649 "thread": "nvmf_tgt_poll_group_000", 00:10:35.649 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:10:35.649 "listen_address": { 00:10:35.649 "trtype": "TCP", 00:10:35.649 "adrfam": "IPv4", 00:10:35.649 "traddr": "10.0.0.3", 00:10:35.649 "trsvcid": "4420" 00:10:35.649 }, 00:10:35.649 "peer_address": { 00:10:35.649 "trtype": "TCP", 00:10:35.649 "adrfam": "IPv4", 00:10:35.649 "traddr": "10.0.0.1", 00:10:35.649 "trsvcid": "60968" 00:10:35.649 }, 00:10:35.649 "auth": { 00:10:35.649 "state": "completed", 00:10:35.649 "digest": "sha384", 00:10:35.649 "dhgroup": "ffdhe2048" 00:10:35.649 } 00:10:35.649 } 00:10:35.649 ]' 00:10:35.649 13:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:35.649 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:35.649 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:35.912 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:35.912 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:35.912 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:35.912 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:35.912 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:36.171 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjQ2ZDlmZjJmZTk0NGE4YmY4YzZmOTMxYzlmMmI0Y2aDcmP9: --dhchap-ctrl-secret DHHC-1:02:MmQxNTRiNGRjZDkwOWI5YmQ4MmZjZjhiMWUwOTc2ZjE0ZmEwMzc0NTgyZGVkYzdiP7GVWQ==: 00:10:36.171 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:01:ZjQ2ZDlmZjJmZTk0NGE4YmY4YzZmOTMxYzlmMmI0Y2aDcmP9: --dhchap-ctrl-secret DHHC-1:02:MmQxNTRiNGRjZDkwOWI5YmQ4MmZjZjhiMWUwOTc2ZjE0ZmEwMzc0NTgyZGVkYzdiP7GVWQ==: 00:10:36.818 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:36.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:36.818 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:10:36.818 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.818 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.818 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.818 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:36.818 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:36.818 13:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:37.076 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:10:37.076 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:37.076 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:37.076 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:37.076 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:37.076 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:37.076 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:37.076 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.076 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.076 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.076 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:37.076 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:37.076 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:37.335 00:10:37.335 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:37.335 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:37.335 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:37.593 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:37.593 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:37.593 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.593 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.593 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.593 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:37.593 { 00:10:37.593 "cntlid": 61, 00:10:37.593 "qid": 0, 00:10:37.593 "state": "enabled", 00:10:37.593 "thread": "nvmf_tgt_poll_group_000", 00:10:37.593 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:10:37.593 "listen_address": { 00:10:37.593 "trtype": "TCP", 00:10:37.593 "adrfam": "IPv4", 00:10:37.593 "traddr": "10.0.0.3", 00:10:37.593 "trsvcid": "4420" 00:10:37.593 }, 00:10:37.593 "peer_address": { 00:10:37.593 "trtype": "TCP", 00:10:37.593 "adrfam": "IPv4", 00:10:37.593 "traddr": "10.0.0.1", 00:10:37.593 "trsvcid": "60984" 00:10:37.593 }, 00:10:37.593 "auth": { 00:10:37.593 "state": "completed", 00:10:37.593 "digest": "sha384", 00:10:37.593 "dhgroup": "ffdhe2048" 00:10:37.593 } 00:10:37.593 } 00:10:37.593 ]' 00:10:37.593 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:37.593 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:37.593 13:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:37.851 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:37.851 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:37.851 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:37.851 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:37.851 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:38.110 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY0MDZkOGNlMjc4MWQ4ZjYyNTAyOTNjNWUzNGExOTQ2NjI0OGE3ZjZlNGQxOWM2YULh6g==: --dhchap-ctrl-secret DHHC-1:01:NzY5OTJjM2UzNDQ0NjQ2NzEyYjk2ZGExZTlkZTZkM2NLb/z4: 00:10:38.110 13:49:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:02:ZDY0MDZkOGNlMjc4MWQ4ZjYyNTAyOTNjNWUzNGExOTQ2NjI0OGE3ZjZlNGQxOWM2YULh6g==: --dhchap-ctrl-secret DHHC-1:01:NzY5OTJjM2UzNDQ0NjQ2NzEyYjk2ZGExZTlkZTZkM2NLb/z4: 00:10:38.678 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:38.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:38.678 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:10:38.678 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.678 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.678 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.678 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:38.678 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:38.678 13:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:38.938 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:10:38.938 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:38.938 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:38.938 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:38.938 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:38.938 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:38.938 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key3 00:10:38.938 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.938 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.938 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.938 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:38.938 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:38.938 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:39.197 00:10:39.197 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:39.197 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:39.197 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:39.457 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:39.457 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:39.457 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.457 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.457 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.457 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:39.457 { 00:10:39.457 "cntlid": 63, 00:10:39.457 "qid": 0, 00:10:39.457 "state": "enabled", 00:10:39.457 "thread": "nvmf_tgt_poll_group_000", 00:10:39.457 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:10:39.457 "listen_address": { 00:10:39.457 "trtype": "TCP", 00:10:39.457 "adrfam": "IPv4", 00:10:39.457 "traddr": "10.0.0.3", 00:10:39.457 "trsvcid": "4420" 00:10:39.457 }, 00:10:39.457 "peer_address": { 00:10:39.457 "trtype": "TCP", 00:10:39.457 "adrfam": "IPv4", 00:10:39.457 "traddr": "10.0.0.1", 00:10:39.457 "trsvcid": "32776" 00:10:39.457 }, 00:10:39.457 "auth": { 00:10:39.457 "state": "completed", 00:10:39.457 "digest": "sha384", 00:10:39.457 "dhgroup": "ffdhe2048" 00:10:39.457 } 00:10:39.457 } 00:10:39.457 ]' 00:10:39.457 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:39.457 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:39.457 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:39.457 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:39.457 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:39.457 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:39.457 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:39.457 13:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:39.716 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZkZjI0Njc3NWQ4YTk5NDkyZGZlZDM1NjIzMDBkY2ZmMzQ2ZTQxZmIxYjQ3NzdiZjY3YTFhNjYxMGQ4YjQ0Y2MZpOQ=: 00:10:39.716 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:03:M2ZkZjI0Njc3NWQ4YTk5NDkyZGZlZDM1NjIzMDBkY2ZmMzQ2ZTQxZmIxYjQ3NzdiZjY3YTFhNjYxMGQ4YjQ0Y2MZpOQ=: 00:10:40.285 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:40.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:40.285 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:10:40.285 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.285 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.285 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.285 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:40.285 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:40.285 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:40.285 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:40.544 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:10:40.544 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:40.544 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:40.544 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:40.544 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:40.544 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:40.544 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:40.544 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.544 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.544 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.544 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:40.544 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:10:40.544 13:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:40.803 00:10:40.803 13:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:40.803 13:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:40.803 13:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:41.062 13:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:41.063 13:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:41.063 13:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.063 13:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.063 13:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.063 13:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:41.063 { 00:10:41.063 "cntlid": 65, 00:10:41.063 "qid": 0, 00:10:41.063 "state": "enabled", 00:10:41.063 "thread": "nvmf_tgt_poll_group_000", 00:10:41.063 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:10:41.063 "listen_address": { 00:10:41.063 "trtype": "TCP", 00:10:41.063 "adrfam": "IPv4", 00:10:41.063 "traddr": "10.0.0.3", 00:10:41.063 "trsvcid": "4420" 00:10:41.063 }, 00:10:41.063 "peer_address": { 00:10:41.063 "trtype": "TCP", 00:10:41.063 "adrfam": "IPv4", 00:10:41.063 "traddr": "10.0.0.1", 00:10:41.063 "trsvcid": "40400" 00:10:41.063 }, 00:10:41.063 "auth": { 00:10:41.063 "state": "completed", 00:10:41.063 "digest": "sha384", 00:10:41.063 "dhgroup": "ffdhe3072" 00:10:41.063 } 00:10:41.063 } 00:10:41.063 ]' 00:10:41.063 13:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:41.322 13:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:41.322 13:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:41.322 13:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:41.322 13:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:41.322 13:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:41.322 13:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:41.322 13:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:41.582 13:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:N2FmN2U1MTUzMjliYzBmZTU3M2ZjNjcwMDhjODg0OTdkMjI3NDkzZGE5NmFiN2EyQFmPpA==: --dhchap-ctrl-secret DHHC-1:03:Njg1NWJiYmZhMjkxMzRjNWM1NDE2OGNlYjc4YjgzMzRiMDVkZmJhMWQ0OGZlMjAyZjNhZDExNWY1YWQzNWE3Ntyf5n4=: 00:10:41.582 13:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:00:N2FmN2U1MTUzMjliYzBmZTU3M2ZjNjcwMDhjODg0OTdkMjI3NDkzZGE5NmFiN2EyQFmPpA==: --dhchap-ctrl-secret DHHC-1:03:Njg1NWJiYmZhMjkxMzRjNWM1NDE2OGNlYjc4YjgzMzRiMDVkZmJhMWQ0OGZlMjAyZjNhZDExNWY1YWQzNWE3Ntyf5n4=: 00:10:42.150 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:42.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:42.150 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:10:42.150 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.150 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.150 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.150 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:42.150 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:42.150 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:42.409 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:10:42.409 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:42.409 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:42.409 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:42.409 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:42.409 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:42.409 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:42.409 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.409 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.409 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.410 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:42.410 13:49:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:42.410 13:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:42.668 00:10:42.668 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:42.668 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:42.668 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:43.237 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:43.237 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:43.237 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.237 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.237 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.237 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:43.237 { 00:10:43.237 "cntlid": 67, 00:10:43.237 "qid": 0, 00:10:43.237 "state": "enabled", 00:10:43.237 "thread": "nvmf_tgt_poll_group_000", 00:10:43.237 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:10:43.237 "listen_address": { 00:10:43.237 "trtype": "TCP", 00:10:43.237 "adrfam": "IPv4", 00:10:43.237 "traddr": "10.0.0.3", 00:10:43.237 "trsvcid": "4420" 00:10:43.237 }, 00:10:43.237 "peer_address": { 00:10:43.237 "trtype": "TCP", 00:10:43.237 "adrfam": "IPv4", 00:10:43.237 "traddr": "10.0.0.1", 00:10:43.237 "trsvcid": "40418" 00:10:43.237 }, 00:10:43.237 "auth": { 00:10:43.237 "state": "completed", 00:10:43.237 "digest": "sha384", 00:10:43.237 "dhgroup": "ffdhe3072" 00:10:43.237 } 00:10:43.237 } 00:10:43.237 ]' 00:10:43.237 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:43.237 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:43.237 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:43.237 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:43.237 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:43.237 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:43.237 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:43.237 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:43.497 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjQ2ZDlmZjJmZTk0NGE4YmY4YzZmOTMxYzlmMmI0Y2aDcmP9: --dhchap-ctrl-secret DHHC-1:02:MmQxNTRiNGRjZDkwOWI5YmQ4MmZjZjhiMWUwOTc2ZjE0ZmEwMzc0NTgyZGVkYzdiP7GVWQ==: 00:10:43.497 13:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:01:ZjQ2ZDlmZjJmZTk0NGE4YmY4YzZmOTMxYzlmMmI0Y2aDcmP9: --dhchap-ctrl-secret DHHC-1:02:MmQxNTRiNGRjZDkwOWI5YmQ4MmZjZjhiMWUwOTc2ZjE0ZmEwMzc0NTgyZGVkYzdiP7GVWQ==: 00:10:44.066 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:44.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:44.066 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:10:44.066 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.066 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.066 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.066 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:44.066 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:44.066 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:44.325 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:10:44.325 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:44.325 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:44.325 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:44.325 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:44.325 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:44.325 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:44.325 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.325 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.326 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.326 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:44.326 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:44.326 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:44.585 00:10:44.585 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:44.585 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:44.585 13:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:44.844 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:44.844 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:44.844 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.844 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.103 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.103 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:45.103 { 00:10:45.103 "cntlid": 69, 00:10:45.103 "qid": 0, 00:10:45.103 "state": "enabled", 00:10:45.103 "thread": "nvmf_tgt_poll_group_000", 00:10:45.103 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:10:45.103 "listen_address": { 00:10:45.103 "trtype": "TCP", 00:10:45.104 "adrfam": "IPv4", 00:10:45.104 "traddr": "10.0.0.3", 00:10:45.104 "trsvcid": "4420" 00:10:45.104 }, 00:10:45.104 "peer_address": { 00:10:45.104 "trtype": "TCP", 00:10:45.104 "adrfam": "IPv4", 00:10:45.104 "traddr": "10.0.0.1", 00:10:45.104 "trsvcid": "40438" 00:10:45.104 }, 00:10:45.104 "auth": { 00:10:45.104 "state": "completed", 00:10:45.104 "digest": "sha384", 00:10:45.104 "dhgroup": "ffdhe3072" 00:10:45.104 } 00:10:45.104 } 00:10:45.104 ]' 00:10:45.104 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:45.104 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:45.104 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:45.104 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:45.104 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:45.104 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:45.104 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:10:45.104 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:45.363 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY0MDZkOGNlMjc4MWQ4ZjYyNTAyOTNjNWUzNGExOTQ2NjI0OGE3ZjZlNGQxOWM2YULh6g==: --dhchap-ctrl-secret DHHC-1:01:NzY5OTJjM2UzNDQ0NjQ2NzEyYjk2ZGExZTlkZTZkM2NLb/z4: 00:10:45.363 13:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:02:ZDY0MDZkOGNlMjc4MWQ4ZjYyNTAyOTNjNWUzNGExOTQ2NjI0OGE3ZjZlNGQxOWM2YULh6g==: --dhchap-ctrl-secret DHHC-1:01:NzY5OTJjM2UzNDQ0NjQ2NzEyYjk2ZGExZTlkZTZkM2NLb/z4: 00:10:46.298 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:46.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:46.298 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:10:46.298 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.298 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.298 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.298 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:46.298 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:46.298 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:46.557 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:10:46.557 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:46.557 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:46.557 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:46.557 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:46.557 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:46.557 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key3 00:10:46.557 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.557 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.557 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.557 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:46.557 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:46.557 13:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:46.815 00:10:46.815 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:46.815 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:46.815 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:47.074 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:47.074 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:47.074 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.074 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.074 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.074 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:47.074 { 00:10:47.074 "cntlid": 71, 00:10:47.074 "qid": 0, 00:10:47.074 "state": "enabled", 00:10:47.074 "thread": "nvmf_tgt_poll_group_000", 00:10:47.074 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:10:47.074 "listen_address": { 00:10:47.074 "trtype": "TCP", 00:10:47.074 "adrfam": "IPv4", 00:10:47.074 "traddr": "10.0.0.3", 00:10:47.074 "trsvcid": "4420" 00:10:47.074 }, 00:10:47.074 "peer_address": { 00:10:47.074 "trtype": "TCP", 00:10:47.074 "adrfam": "IPv4", 00:10:47.074 "traddr": "10.0.0.1", 00:10:47.074 "trsvcid": "40468" 00:10:47.074 }, 00:10:47.074 "auth": { 00:10:47.074 "state": "completed", 00:10:47.074 "digest": "sha384", 00:10:47.074 "dhgroup": "ffdhe3072" 00:10:47.074 } 00:10:47.074 } 00:10:47.074 ]' 00:10:47.074 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:47.333 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:47.333 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:47.333 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:47.333 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:47.333 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:47.333 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:47.333 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:47.592 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZkZjI0Njc3NWQ4YTk5NDkyZGZlZDM1NjIzMDBkY2ZmMzQ2ZTQxZmIxYjQ3NzdiZjY3YTFhNjYxMGQ4YjQ0Y2MZpOQ=: 00:10:47.592 13:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:03:M2ZkZjI0Njc3NWQ4YTk5NDkyZGZlZDM1NjIzMDBkY2ZmMzQ2ZTQxZmIxYjQ3NzdiZjY3YTFhNjYxMGQ4YjQ0Y2MZpOQ=: 00:10:48.160 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:48.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:48.160 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:10:48.160 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.160 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.160 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.160 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:48.160 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:48.160 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:48.160 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:48.728 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:10:48.728 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:48.728 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:48.728 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:48.728 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:48.728 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:48.728 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:48.728 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.728 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.728 13:49:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.728 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:48.728 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:48.728 13:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:49.028 00:10:49.028 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:49.028 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:49.028 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:49.287 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:49.288 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:49.288 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.288 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.288 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.288 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:49.288 { 00:10:49.288 "cntlid": 73, 00:10:49.288 "qid": 0, 00:10:49.288 "state": "enabled", 00:10:49.288 "thread": "nvmf_tgt_poll_group_000", 00:10:49.288 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:10:49.288 "listen_address": { 00:10:49.288 "trtype": "TCP", 00:10:49.288 "adrfam": "IPv4", 00:10:49.288 "traddr": "10.0.0.3", 00:10:49.288 "trsvcid": "4420" 00:10:49.288 }, 00:10:49.288 "peer_address": { 00:10:49.288 "trtype": "TCP", 00:10:49.288 "adrfam": "IPv4", 00:10:49.288 "traddr": "10.0.0.1", 00:10:49.288 "trsvcid": "40506" 00:10:49.288 }, 00:10:49.288 "auth": { 00:10:49.288 "state": "completed", 00:10:49.288 "digest": "sha384", 00:10:49.288 "dhgroup": "ffdhe4096" 00:10:49.288 } 00:10:49.288 } 00:10:49.288 ]' 00:10:49.288 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:49.288 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:49.288 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:49.288 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:49.288 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:49.288 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:49.288 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:49.288 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:49.855 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2FmN2U1MTUzMjliYzBmZTU3M2ZjNjcwMDhjODg0OTdkMjI3NDkzZGE5NmFiN2EyQFmPpA==: --dhchap-ctrl-secret DHHC-1:03:Njg1NWJiYmZhMjkxMzRjNWM1NDE2OGNlYjc4YjgzMzRiMDVkZmJhMWQ0OGZlMjAyZjNhZDExNWY1YWQzNWE3Ntyf5n4=: 00:10:49.855 13:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:00:N2FmN2U1MTUzMjliYzBmZTU3M2ZjNjcwMDhjODg0OTdkMjI3NDkzZGE5NmFiN2EyQFmPpA==: --dhchap-ctrl-secret DHHC-1:03:Njg1NWJiYmZhMjkxMzRjNWM1NDE2OGNlYjc4YjgzMzRiMDVkZmJhMWQ0OGZlMjAyZjNhZDExNWY1YWQzNWE3Ntyf5n4=: 00:10:50.433 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:50.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:50.433 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:10:50.433 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.433 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.433 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.433 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:50.433 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:50.433 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:50.707 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:10:50.707 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:50.707 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:50.707 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:50.707 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:50.707 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:50.707 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:50.707 13:49:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.707 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.707 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.707 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:50.707 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:50.707 13:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:50.965 00:10:50.965 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:50.965 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:50.965 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:51.225 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:51.225 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:51.225 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.225 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.225 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.225 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:51.225 { 00:10:51.225 "cntlid": 75, 00:10:51.225 "qid": 0, 00:10:51.225 "state": "enabled", 00:10:51.225 "thread": "nvmf_tgt_poll_group_000", 00:10:51.225 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:10:51.225 "listen_address": { 00:10:51.225 "trtype": "TCP", 00:10:51.225 "adrfam": "IPv4", 00:10:51.225 "traddr": "10.0.0.3", 00:10:51.225 "trsvcid": "4420" 00:10:51.225 }, 00:10:51.225 "peer_address": { 00:10:51.225 "trtype": "TCP", 00:10:51.225 "adrfam": "IPv4", 00:10:51.225 "traddr": "10.0.0.1", 00:10:51.225 "trsvcid": "60486" 00:10:51.225 }, 00:10:51.225 "auth": { 00:10:51.225 "state": "completed", 00:10:51.225 "digest": "sha384", 00:10:51.225 "dhgroup": "ffdhe4096" 00:10:51.225 } 00:10:51.225 } 00:10:51.225 ]' 00:10:51.225 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:51.225 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:51.225 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:51.485 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:10:51.485 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:51.485 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:51.485 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:51.485 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:51.745 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjQ2ZDlmZjJmZTk0NGE4YmY4YzZmOTMxYzlmMmI0Y2aDcmP9: --dhchap-ctrl-secret DHHC-1:02:MmQxNTRiNGRjZDkwOWI5YmQ4MmZjZjhiMWUwOTc2ZjE0ZmEwMzc0NTgyZGVkYzdiP7GVWQ==: 00:10:51.745 13:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:01:ZjQ2ZDlmZjJmZTk0NGE4YmY4YzZmOTMxYzlmMmI0Y2aDcmP9: --dhchap-ctrl-secret DHHC-1:02:MmQxNTRiNGRjZDkwOWI5YmQ4MmZjZjhiMWUwOTc2ZjE0ZmEwMzc0NTgyZGVkYzdiP7GVWQ==: 00:10:52.314 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:52.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:52.314 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:10:52.314 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.314 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.314 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.314 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:52.314 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:52.314 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:52.574 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:10:52.574 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:52.574 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:52.574 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:52.574 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:52.574 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:52.574 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:52.574 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.574 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.574 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.574 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:52.574 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:52.574 13:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:52.833 00:10:52.833 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:52.833 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:52.833 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:53.402 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:53.402 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:53.402 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.402 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.402 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.402 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:53.402 { 00:10:53.402 "cntlid": 77, 00:10:53.402 "qid": 0, 00:10:53.402 "state": "enabled", 00:10:53.402 "thread": "nvmf_tgt_poll_group_000", 00:10:53.402 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:10:53.402 "listen_address": { 00:10:53.402 "trtype": "TCP", 00:10:53.402 "adrfam": "IPv4", 00:10:53.402 "traddr": "10.0.0.3", 00:10:53.402 "trsvcid": "4420" 00:10:53.402 }, 00:10:53.402 "peer_address": { 00:10:53.402 "trtype": "TCP", 00:10:53.402 "adrfam": "IPv4", 00:10:53.402 "traddr": "10.0.0.1", 00:10:53.403 "trsvcid": "60512" 00:10:53.403 }, 00:10:53.403 "auth": { 00:10:53.403 "state": "completed", 00:10:53.403 "digest": "sha384", 00:10:53.403 "dhgroup": "ffdhe4096" 00:10:53.403 } 00:10:53.403 } 00:10:53.403 ]' 00:10:53.403 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:53.403 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:53.403 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:10:53.403 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:53.403 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:53.403 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:53.403 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:53.403 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:53.662 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY0MDZkOGNlMjc4MWQ4ZjYyNTAyOTNjNWUzNGExOTQ2NjI0OGE3ZjZlNGQxOWM2YULh6g==: --dhchap-ctrl-secret DHHC-1:01:NzY5OTJjM2UzNDQ0NjQ2NzEyYjk2ZGExZTlkZTZkM2NLb/z4: 00:10:53.662 13:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:02:ZDY0MDZkOGNlMjc4MWQ4ZjYyNTAyOTNjNWUzNGExOTQ2NjI0OGE3ZjZlNGQxOWM2YULh6g==: --dhchap-ctrl-secret DHHC-1:01:NzY5OTJjM2UzNDQ0NjQ2NzEyYjk2ZGExZTlkZTZkM2NLb/z4: 00:10:54.231 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:54.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:54.231 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:10:54.231 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.231 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.231 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.231 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:54.231 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:54.231 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:54.491 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:10:54.491 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:54.491 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:54.491 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:54.491 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:54.491 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:54.491 13:49:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key3 00:10:54.491 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.491 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.491 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.491 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:54.491 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:54.491 13:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:55.057 00:10:55.057 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:55.057 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:55.057 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:55.316 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:55.316 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:55.316 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.316 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.316 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.316 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:55.316 { 00:10:55.316 "cntlid": 79, 00:10:55.316 "qid": 0, 00:10:55.316 "state": "enabled", 00:10:55.316 "thread": "nvmf_tgt_poll_group_000", 00:10:55.316 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:10:55.316 "listen_address": { 00:10:55.316 "trtype": "TCP", 00:10:55.316 "adrfam": "IPv4", 00:10:55.316 "traddr": "10.0.0.3", 00:10:55.316 "trsvcid": "4420" 00:10:55.316 }, 00:10:55.316 "peer_address": { 00:10:55.316 "trtype": "TCP", 00:10:55.316 "adrfam": "IPv4", 00:10:55.316 "traddr": "10.0.0.1", 00:10:55.316 "trsvcid": "60534" 00:10:55.316 }, 00:10:55.316 "auth": { 00:10:55.316 "state": "completed", 00:10:55.316 "digest": "sha384", 00:10:55.316 "dhgroup": "ffdhe4096" 00:10:55.316 } 00:10:55.316 } 00:10:55.316 ]' 00:10:55.316 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:55.316 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:55.316 13:49:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:55.316 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:55.316 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:55.316 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:55.316 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:55.316 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:55.575 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZkZjI0Njc3NWQ4YTk5NDkyZGZlZDM1NjIzMDBkY2ZmMzQ2ZTQxZmIxYjQ3NzdiZjY3YTFhNjYxMGQ4YjQ0Y2MZpOQ=: 00:10:55.575 13:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:03:M2ZkZjI0Njc3NWQ4YTk5NDkyZGZlZDM1NjIzMDBkY2ZmMzQ2ZTQxZmIxYjQ3NzdiZjY3YTFhNjYxMGQ4YjQ0Y2MZpOQ=: 00:10:56.142 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:56.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:56.142 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:10:56.142 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.142 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.143 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.143 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:56.143 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:56.143 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:56.143 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:56.399 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:10:56.399 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:56.399 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:56.399 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:56.399 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:56.399 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:56.399 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:56.399 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.399 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.656 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.656 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:56.656 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:56.656 13:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:56.913 00:10:56.913 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:56.913 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:56.913 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:57.170 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:57.170 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:57.170 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.170 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.170 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.170 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:57.170 { 00:10:57.170 "cntlid": 81, 00:10:57.170 "qid": 0, 00:10:57.170 "state": "enabled", 00:10:57.170 "thread": "nvmf_tgt_poll_group_000", 00:10:57.170 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:10:57.170 "listen_address": { 00:10:57.170 "trtype": "TCP", 00:10:57.170 "adrfam": "IPv4", 00:10:57.170 "traddr": "10.0.0.3", 00:10:57.171 "trsvcid": "4420" 00:10:57.171 }, 00:10:57.171 "peer_address": { 00:10:57.171 "trtype": "TCP", 00:10:57.171 "adrfam": "IPv4", 00:10:57.171 "traddr": "10.0.0.1", 00:10:57.171 "trsvcid": "60562" 00:10:57.171 }, 00:10:57.171 "auth": { 00:10:57.171 "state": "completed", 00:10:57.171 "digest": "sha384", 00:10:57.171 "dhgroup": "ffdhe6144" 00:10:57.171 } 00:10:57.171 } 00:10:57.171 ]' 00:10:57.171 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
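The trace above and below repeats the same verification cycle once per DH group and key index. A minimal sketch of one cycle, assembled only from the RPC calls and flags visible in this log (rpc_cmd and hostrpc are the test script's own wrappers around scripts/rpc.py for the target socket and the /var/tmp/host.sock host socket; the host NQN, addresses and DHHC-1 secrets shown as <...> or ... are the test's own values and are left elided here); it is an illustration of the flow, not a standalone script:
  # allow the digest/dhgroup pair under test on the host-side bdev_nvme layer
  hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
  # register the host on the subsystem with the key index (and optional controller key) under test
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <host NQN> --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # attach a controller through the host RPC and check that the qpair finished DH-HMAC-CHAP
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q <host NQN> -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expected: "completed"
  # tear down, re-connect once with nvme-cli using the exported DHHC-1 secrets, then clean up
  hostrpc bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 --dhchap-secret DHHC-1:00:... --dhchap-ctrl-secret DHHC-1:03:...
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 <host NQN>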
00:10:57.428 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:57.428 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:57.428 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:57.428 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:57.429 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:57.429 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:57.429 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:57.687 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2FmN2U1MTUzMjliYzBmZTU3M2ZjNjcwMDhjODg0OTdkMjI3NDkzZGE5NmFiN2EyQFmPpA==: --dhchap-ctrl-secret DHHC-1:03:Njg1NWJiYmZhMjkxMzRjNWM1NDE2OGNlYjc4YjgzMzRiMDVkZmJhMWQ0OGZlMjAyZjNhZDExNWY1YWQzNWE3Ntyf5n4=: 00:10:57.687 13:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:00:N2FmN2U1MTUzMjliYzBmZTU3M2ZjNjcwMDhjODg0OTdkMjI3NDkzZGE5NmFiN2EyQFmPpA==: --dhchap-ctrl-secret DHHC-1:03:Njg1NWJiYmZhMjkxMzRjNWM1NDE2OGNlYjc4YjgzMzRiMDVkZmJhMWQ0OGZlMjAyZjNhZDExNWY1YWQzNWE3Ntyf5n4=: 00:10:58.254 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:58.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:58.254 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:10:58.254 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.254 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.254 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.254 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:58.254 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:58.254 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:58.822 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:10:58.822 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:58.822 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:58.822 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:10:58.822 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:58.822 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:58.822 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:58.822 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.822 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.822 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.822 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:58.822 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:58.822 13:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:59.080 00:10:59.080 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:59.080 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:59.080 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:59.346 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:59.346 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:59.346 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.346 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.346 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.346 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:59.346 { 00:10:59.346 "cntlid": 83, 00:10:59.346 "qid": 0, 00:10:59.346 "state": "enabled", 00:10:59.346 "thread": "nvmf_tgt_poll_group_000", 00:10:59.346 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:10:59.346 "listen_address": { 00:10:59.346 "trtype": "TCP", 00:10:59.346 "adrfam": "IPv4", 00:10:59.346 "traddr": "10.0.0.3", 00:10:59.346 "trsvcid": "4420" 00:10:59.346 }, 00:10:59.346 "peer_address": { 00:10:59.346 "trtype": "TCP", 00:10:59.346 "adrfam": "IPv4", 00:10:59.346 "traddr": "10.0.0.1", 00:10:59.346 "trsvcid": "60590" 00:10:59.346 }, 00:10:59.346 "auth": { 00:10:59.346 "state": "completed", 00:10:59.346 "digest": "sha384", 
00:10:59.346 "dhgroup": "ffdhe6144" 00:10:59.346 } 00:10:59.346 } 00:10:59.346 ]' 00:10:59.346 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:59.346 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:59.346 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:59.606 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:59.606 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:59.606 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:59.606 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:59.606 13:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:59.865 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjQ2ZDlmZjJmZTk0NGE4YmY4YzZmOTMxYzlmMmI0Y2aDcmP9: --dhchap-ctrl-secret DHHC-1:02:MmQxNTRiNGRjZDkwOWI5YmQ4MmZjZjhiMWUwOTc2ZjE0ZmEwMzc0NTgyZGVkYzdiP7GVWQ==: 00:10:59.865 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:01:ZjQ2ZDlmZjJmZTk0NGE4YmY4YzZmOTMxYzlmMmI0Y2aDcmP9: --dhchap-ctrl-secret DHHC-1:02:MmQxNTRiNGRjZDkwOWI5YmQ4MmZjZjhiMWUwOTc2ZjE0ZmEwMzc0NTgyZGVkYzdiP7GVWQ==: 00:11:00.433 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:00.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:00.433 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:11:00.433 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.433 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.433 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.433 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:00.433 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:00.433 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:00.694 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:11:00.694 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:00.694 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:11:00.694 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:00.694 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:00.694 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:00.694 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:00.694 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.694 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.694 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.694 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:00.694 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:00.694 13:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:01.261 00:11:01.261 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:01.261 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:01.261 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:01.520 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:01.520 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:01.520 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.520 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.520 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.520 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:01.520 { 00:11:01.520 "cntlid": 85, 00:11:01.520 "qid": 0, 00:11:01.520 "state": "enabled", 00:11:01.520 "thread": "nvmf_tgt_poll_group_000", 00:11:01.520 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:11:01.520 "listen_address": { 00:11:01.520 "trtype": "TCP", 00:11:01.520 "adrfam": "IPv4", 00:11:01.520 "traddr": "10.0.0.3", 00:11:01.520 "trsvcid": "4420" 00:11:01.520 }, 00:11:01.520 "peer_address": { 00:11:01.520 "trtype": "TCP", 00:11:01.520 "adrfam": "IPv4", 00:11:01.520 "traddr": "10.0.0.1", 00:11:01.520 "trsvcid": "59750" 
00:11:01.520 }, 00:11:01.520 "auth": { 00:11:01.520 "state": "completed", 00:11:01.520 "digest": "sha384", 00:11:01.520 "dhgroup": "ffdhe6144" 00:11:01.520 } 00:11:01.520 } 00:11:01.520 ]' 00:11:01.520 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:01.520 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:01.520 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:01.520 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:01.520 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:01.520 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:01.520 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:01.520 13:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:02.088 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY0MDZkOGNlMjc4MWQ4ZjYyNTAyOTNjNWUzNGExOTQ2NjI0OGE3ZjZlNGQxOWM2YULh6g==: --dhchap-ctrl-secret DHHC-1:01:NzY5OTJjM2UzNDQ0NjQ2NzEyYjk2ZGExZTlkZTZkM2NLb/z4: 00:11:02.088 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:02:ZDY0MDZkOGNlMjc4MWQ4ZjYyNTAyOTNjNWUzNGExOTQ2NjI0OGE3ZjZlNGQxOWM2YULh6g==: --dhchap-ctrl-secret DHHC-1:01:NzY5OTJjM2UzNDQ0NjQ2NzEyYjk2ZGExZTlkZTZkM2NLb/z4: 00:11:02.347 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:02.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:02.347 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:11:02.347 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.347 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.347 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.347 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:02.347 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:02.347 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:02.607 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:11:02.607 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:11:02.607 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:02.607 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:02.607 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:02.607 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:02.607 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key3 00:11:02.607 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.607 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.607 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.607 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:02.607 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:02.607 13:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:03.176 00:11:03.176 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:03.176 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:03.176 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:03.435 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:03.435 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:03.435 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.435 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.435 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.435 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:03.435 { 00:11:03.435 "cntlid": 87, 00:11:03.435 "qid": 0, 00:11:03.435 "state": "enabled", 00:11:03.435 "thread": "nvmf_tgt_poll_group_000", 00:11:03.435 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:11:03.435 "listen_address": { 00:11:03.435 "trtype": "TCP", 00:11:03.435 "adrfam": "IPv4", 00:11:03.435 "traddr": "10.0.0.3", 00:11:03.435 "trsvcid": "4420" 00:11:03.435 }, 00:11:03.435 "peer_address": { 00:11:03.435 "trtype": "TCP", 00:11:03.435 "adrfam": "IPv4", 00:11:03.435 "traddr": "10.0.0.1", 00:11:03.435 "trsvcid": 
"59790" 00:11:03.435 }, 00:11:03.435 "auth": { 00:11:03.435 "state": "completed", 00:11:03.435 "digest": "sha384", 00:11:03.435 "dhgroup": "ffdhe6144" 00:11:03.435 } 00:11:03.435 } 00:11:03.435 ]' 00:11:03.435 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:03.435 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:03.435 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:03.435 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:03.435 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:03.435 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:03.435 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:03.435 13:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:03.694 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZkZjI0Njc3NWQ4YTk5NDkyZGZlZDM1NjIzMDBkY2ZmMzQ2ZTQxZmIxYjQ3NzdiZjY3YTFhNjYxMGQ4YjQ0Y2MZpOQ=: 00:11:03.695 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:03:M2ZkZjI0Njc3NWQ4YTk5NDkyZGZlZDM1NjIzMDBkY2ZmMzQ2ZTQxZmIxYjQ3NzdiZjY3YTFhNjYxMGQ4YjQ0Y2MZpOQ=: 00:11:04.263 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:04.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:04.263 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:11:04.263 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.263 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.263 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.263 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:04.263 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:04.263 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:04.263 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:04.522 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:11:04.522 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:11:04.523 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:04.523 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:04.523 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:04.523 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:04.523 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:04.523 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.523 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.523 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.523 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:04.523 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:04.523 13:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:05.104 00:11:05.104 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:05.104 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:05.104 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:05.383 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:05.383 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:05.383 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.383 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.383 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.383 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:05.383 { 00:11:05.383 "cntlid": 89, 00:11:05.383 "qid": 0, 00:11:05.383 "state": "enabled", 00:11:05.383 "thread": "nvmf_tgt_poll_group_000", 00:11:05.383 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:11:05.383 "listen_address": { 00:11:05.383 "trtype": "TCP", 00:11:05.383 "adrfam": "IPv4", 00:11:05.383 "traddr": "10.0.0.3", 00:11:05.383 "trsvcid": "4420" 00:11:05.383 }, 00:11:05.383 "peer_address": { 00:11:05.383 
"trtype": "TCP", 00:11:05.383 "adrfam": "IPv4", 00:11:05.383 "traddr": "10.0.0.1", 00:11:05.383 "trsvcid": "59820" 00:11:05.383 }, 00:11:05.383 "auth": { 00:11:05.383 "state": "completed", 00:11:05.383 "digest": "sha384", 00:11:05.383 "dhgroup": "ffdhe8192" 00:11:05.383 } 00:11:05.383 } 00:11:05.383 ]' 00:11:05.383 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:05.383 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:05.383 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:05.383 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:05.383 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:05.642 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:05.642 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:05.642 13:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:05.913 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2FmN2U1MTUzMjliYzBmZTU3M2ZjNjcwMDhjODg0OTdkMjI3NDkzZGE5NmFiN2EyQFmPpA==: --dhchap-ctrl-secret DHHC-1:03:Njg1NWJiYmZhMjkxMzRjNWM1NDE2OGNlYjc4YjgzMzRiMDVkZmJhMWQ0OGZlMjAyZjNhZDExNWY1YWQzNWE3Ntyf5n4=: 00:11:05.913 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:00:N2FmN2U1MTUzMjliYzBmZTU3M2ZjNjcwMDhjODg0OTdkMjI3NDkzZGE5NmFiN2EyQFmPpA==: --dhchap-ctrl-secret DHHC-1:03:Njg1NWJiYmZhMjkxMzRjNWM1NDE2OGNlYjc4YjgzMzRiMDVkZmJhMWQ0OGZlMjAyZjNhZDExNWY1YWQzNWE3Ntyf5n4=: 00:11:06.481 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:06.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:06.481 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:11:06.481 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.481 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.481 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.481 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:06.481 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:06.481 13:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:06.740 13:50:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:11:06.740 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:06.740 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:06.740 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:06.740 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:06.740 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:06.740 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:06.740 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.740 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.740 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.740 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:06.740 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:06.740 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:07.308 00:11:07.308 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:07.308 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:07.308 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:07.567 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:07.567 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:07.567 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.567 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.567 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.567 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:07.567 { 00:11:07.567 "cntlid": 91, 00:11:07.567 "qid": 0, 00:11:07.567 "state": "enabled", 00:11:07.567 "thread": "nvmf_tgt_poll_group_000", 00:11:07.567 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 
00:11:07.567 "listen_address": { 00:11:07.567 "trtype": "TCP", 00:11:07.567 "adrfam": "IPv4", 00:11:07.567 "traddr": "10.0.0.3", 00:11:07.567 "trsvcid": "4420" 00:11:07.567 }, 00:11:07.567 "peer_address": { 00:11:07.567 "trtype": "TCP", 00:11:07.567 "adrfam": "IPv4", 00:11:07.567 "traddr": "10.0.0.1", 00:11:07.567 "trsvcid": "59850" 00:11:07.567 }, 00:11:07.567 "auth": { 00:11:07.567 "state": "completed", 00:11:07.567 "digest": "sha384", 00:11:07.567 "dhgroup": "ffdhe8192" 00:11:07.567 } 00:11:07.567 } 00:11:07.567 ]' 00:11:07.567 13:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:07.825 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:07.825 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:07.825 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:07.826 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:07.826 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:07.826 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:07.826 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:08.084 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjQ2ZDlmZjJmZTk0NGE4YmY4YzZmOTMxYzlmMmI0Y2aDcmP9: --dhchap-ctrl-secret DHHC-1:02:MmQxNTRiNGRjZDkwOWI5YmQ4MmZjZjhiMWUwOTc2ZjE0ZmEwMzc0NTgyZGVkYzdiP7GVWQ==: 00:11:08.084 13:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:01:ZjQ2ZDlmZjJmZTk0NGE4YmY4YzZmOTMxYzlmMmI0Y2aDcmP9: --dhchap-ctrl-secret DHHC-1:02:MmQxNTRiNGRjZDkwOWI5YmQ4MmZjZjhiMWUwOTc2ZjE0ZmEwMzc0NTgyZGVkYzdiP7GVWQ==: 00:11:09.022 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:09.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:09.022 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:11:09.022 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.022 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.022 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.022 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:09.022 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:09.022 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:09.022 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:11:09.022 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:09.022 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:09.022 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:09.022 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:09.022 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:09.022 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:09.022 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.022 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.022 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.022 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:09.022 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:09.022 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:09.589 00:11:09.589 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:09.589 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:09.589 13:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:10.154 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:10.154 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:10.154 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.154 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.154 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.154 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:10.154 { 00:11:10.154 "cntlid": 93, 00:11:10.154 "qid": 0, 00:11:10.154 "state": "enabled", 00:11:10.154 "thread": 
"nvmf_tgt_poll_group_000", 00:11:10.154 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:11:10.154 "listen_address": { 00:11:10.154 "trtype": "TCP", 00:11:10.154 "adrfam": "IPv4", 00:11:10.154 "traddr": "10.0.0.3", 00:11:10.154 "trsvcid": "4420" 00:11:10.154 }, 00:11:10.154 "peer_address": { 00:11:10.154 "trtype": "TCP", 00:11:10.154 "adrfam": "IPv4", 00:11:10.154 "traddr": "10.0.0.1", 00:11:10.154 "trsvcid": "59868" 00:11:10.154 }, 00:11:10.154 "auth": { 00:11:10.154 "state": "completed", 00:11:10.154 "digest": "sha384", 00:11:10.154 "dhgroup": "ffdhe8192" 00:11:10.154 } 00:11:10.154 } 00:11:10.154 ]' 00:11:10.154 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:10.154 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:10.154 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:10.154 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:10.154 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:10.154 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:10.155 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:10.155 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:10.413 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY0MDZkOGNlMjc4MWQ4ZjYyNTAyOTNjNWUzNGExOTQ2NjI0OGE3ZjZlNGQxOWM2YULh6g==: --dhchap-ctrl-secret DHHC-1:01:NzY5OTJjM2UzNDQ0NjQ2NzEyYjk2ZGExZTlkZTZkM2NLb/z4: 00:11:10.413 13:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:02:ZDY0MDZkOGNlMjc4MWQ4ZjYyNTAyOTNjNWUzNGExOTQ2NjI0OGE3ZjZlNGQxOWM2YULh6g==: --dhchap-ctrl-secret DHHC-1:01:NzY5OTJjM2UzNDQ0NjQ2NzEyYjk2ZGExZTlkZTZkM2NLb/z4: 00:11:10.980 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:10.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:10.980 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:11:10.980 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.980 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.980 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.980 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:10.980 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:10.980 13:50:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:11.237 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:11:11.237 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:11.237 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:11.237 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:11.237 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:11.237 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:11.237 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key3 00:11:11.237 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.237 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.237 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.237 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:11.237 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:11.237 13:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:11.802 00:11:11.802 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:11.802 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:11.802 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:12.061 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:12.061 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:12.061 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.061 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.061 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.061 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:12.061 { 00:11:12.061 "cntlid": 95, 00:11:12.061 "qid": 0, 00:11:12.061 "state": "enabled", 00:11:12.061 
"thread": "nvmf_tgt_poll_group_000", 00:11:12.061 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:11:12.061 "listen_address": { 00:11:12.061 "trtype": "TCP", 00:11:12.061 "adrfam": "IPv4", 00:11:12.061 "traddr": "10.0.0.3", 00:11:12.061 "trsvcid": "4420" 00:11:12.061 }, 00:11:12.061 "peer_address": { 00:11:12.061 "trtype": "TCP", 00:11:12.061 "adrfam": "IPv4", 00:11:12.061 "traddr": "10.0.0.1", 00:11:12.061 "trsvcid": "54002" 00:11:12.061 }, 00:11:12.061 "auth": { 00:11:12.061 "state": "completed", 00:11:12.061 "digest": "sha384", 00:11:12.061 "dhgroup": "ffdhe8192" 00:11:12.061 } 00:11:12.061 } 00:11:12.061 ]' 00:11:12.061 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:12.319 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:12.319 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:12.319 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:12.319 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:12.319 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:12.319 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:12.319 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:12.577 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZkZjI0Njc3NWQ4YTk5NDkyZGZlZDM1NjIzMDBkY2ZmMzQ2ZTQxZmIxYjQ3NzdiZjY3YTFhNjYxMGQ4YjQ0Y2MZpOQ=: 00:11:12.577 13:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:03:M2ZkZjI0Njc3NWQ4YTk5NDkyZGZlZDM1NjIzMDBkY2ZmMzQ2ZTQxZmIxYjQ3NzdiZjY3YTFhNjYxMGQ4YjQ0Y2MZpOQ=: 00:11:13.144 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:13.144 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:13.144 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:11:13.144 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.144 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.144 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.144 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:13.144 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:13.144 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:13.144 13:50:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:13.144 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:13.402 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:11:13.402 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:13.402 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:13.402 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:13.402 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:13.402 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:13.402 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:13.402 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.402 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.402 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.402 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:13.402 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:13.402 13:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:13.660 00:11:13.660 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:13.660 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:13.660 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:14.228 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:14.228 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:14.228 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.228 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.228 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
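
The hostrpc calls above are the initiator half of each round, issued to a second SPDK application listening on /var/tmp/host.sock: first restrict the host to a single digest/DH-group combination, then attach a bdev controller with the key pair under test. A sketch of those two calls as they appear for the sha512/null round with key0 (rpc.py path and addresses as used in this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # limit DH-HMAC-CHAP negotiation on the host to sha512 and the null DH group
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
    # attach to the target with key0/ckey0; a successful attach implies authentication completed
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
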
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.228 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:14.228 { 00:11:14.228 "cntlid": 97, 00:11:14.228 "qid": 0, 00:11:14.228 "state": "enabled", 00:11:14.228 "thread": "nvmf_tgt_poll_group_000", 00:11:14.228 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:11:14.228 "listen_address": { 00:11:14.228 "trtype": "TCP", 00:11:14.228 "adrfam": "IPv4", 00:11:14.228 "traddr": "10.0.0.3", 00:11:14.228 "trsvcid": "4420" 00:11:14.228 }, 00:11:14.228 "peer_address": { 00:11:14.228 "trtype": "TCP", 00:11:14.228 "adrfam": "IPv4", 00:11:14.228 "traddr": "10.0.0.1", 00:11:14.228 "trsvcid": "54036" 00:11:14.228 }, 00:11:14.228 "auth": { 00:11:14.228 "state": "completed", 00:11:14.228 "digest": "sha512", 00:11:14.228 "dhgroup": "null" 00:11:14.228 } 00:11:14.228 } 00:11:14.228 ]' 00:11:14.228 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:14.228 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:14.228 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:14.228 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:14.228 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:14.228 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:14.228 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:14.228 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:14.486 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2FmN2U1MTUzMjliYzBmZTU3M2ZjNjcwMDhjODg0OTdkMjI3NDkzZGE5NmFiN2EyQFmPpA==: --dhchap-ctrl-secret DHHC-1:03:Njg1NWJiYmZhMjkxMzRjNWM1NDE2OGNlYjc4YjgzMzRiMDVkZmJhMWQ0OGZlMjAyZjNhZDExNWY1YWQzNWE3Ntyf5n4=: 00:11:14.486 13:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:00:N2FmN2U1MTUzMjliYzBmZTU3M2ZjNjcwMDhjODg0OTdkMjI3NDkzZGE5NmFiN2EyQFmPpA==: --dhchap-ctrl-secret DHHC-1:03:Njg1NWJiYmZhMjkxMzRjNWM1NDE2OGNlYjc4YjgzMzRiMDVkZmJhMWQ0OGZlMjAyZjNhZDExNWY1YWQzNWE3Ntyf5n4=: 00:11:15.053 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:15.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:15.053 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:11:15.053 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.054 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.054 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:15.054 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:15.054 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:15.054 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:15.313 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:11:15.313 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:15.313 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:15.313 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:15.313 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:15.313 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:15.313 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:15.313 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.313 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.313 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.313 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:15.313 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:15.313 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:15.571 00:11:15.571 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:15.571 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:15.571 13:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:16.145 13:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:16.145 13:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:16.145 13:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.145 13:50:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.145 13:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.145 13:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:16.145 { 00:11:16.145 "cntlid": 99, 00:11:16.145 "qid": 0, 00:11:16.145 "state": "enabled", 00:11:16.145 "thread": "nvmf_tgt_poll_group_000", 00:11:16.145 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:11:16.145 "listen_address": { 00:11:16.145 "trtype": "TCP", 00:11:16.145 "adrfam": "IPv4", 00:11:16.145 "traddr": "10.0.0.3", 00:11:16.145 "trsvcid": "4420" 00:11:16.145 }, 00:11:16.145 "peer_address": { 00:11:16.145 "trtype": "TCP", 00:11:16.145 "adrfam": "IPv4", 00:11:16.145 "traddr": "10.0.0.1", 00:11:16.145 "trsvcid": "54052" 00:11:16.145 }, 00:11:16.145 "auth": { 00:11:16.145 "state": "completed", 00:11:16.145 "digest": "sha512", 00:11:16.145 "dhgroup": "null" 00:11:16.145 } 00:11:16.145 } 00:11:16.145 ]' 00:11:16.145 13:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:16.145 13:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:16.145 13:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:16.145 13:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:16.145 13:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:16.145 13:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:16.145 13:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:16.145 13:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:16.403 13:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjQ2ZDlmZjJmZTk0NGE4YmY4YzZmOTMxYzlmMmI0Y2aDcmP9: --dhchap-ctrl-secret DHHC-1:02:MmQxNTRiNGRjZDkwOWI5YmQ4MmZjZjhiMWUwOTc2ZjE0ZmEwMzc0NTgyZGVkYzdiP7GVWQ==: 00:11:16.403 13:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:01:ZjQ2ZDlmZjJmZTk0NGE4YmY4YzZmOTMxYzlmMmI0Y2aDcmP9: --dhchap-ctrl-secret DHHC-1:02:MmQxNTRiNGRjZDkwOWI5YmQ4MmZjZjhiMWUwOTc2ZjE0ZmEwMzc0NTgyZGVkYzdiP7GVWQ==: 00:11:16.970 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:16.970 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:16.970 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:11:16.970 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.970 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.970 13:50:16 
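
After the bdev-level check, each round repeats the handshake from the kernel initiator with nvme-cli, passing the raw DHHC-1 secrets rather than SPDK key names, then disconnects and deregisters the host. A sketch of that connect/disconnect pair, with placeholder variables standing in for the DHHC-1:xx:...: secret strings logged above:

    # host_secret / ctrl_secret are placeholders for the DHHC-1 strings of the key pair under test
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
        -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 \
        --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 \
        --dhchap-secret "$host_secret" --dhchap-ctrl-secret "$ctrl_secret"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
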
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.970 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:16.970 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:16.970 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:17.229 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:11:17.229 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:17.229 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:17.229 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:17.229 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:17.229 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:17.229 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:17.229 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.229 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.229 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.229 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:17.229 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:17.229 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:17.488 00:11:17.488 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:17.488 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:17.488 13:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:17.747 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:17.747 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:17.747 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.747 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.747 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.747 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:17.747 { 00:11:17.747 "cntlid": 101, 00:11:17.747 "qid": 0, 00:11:17.747 "state": "enabled", 00:11:17.747 "thread": "nvmf_tgt_poll_group_000", 00:11:17.747 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:11:17.747 "listen_address": { 00:11:17.747 "trtype": "TCP", 00:11:17.747 "adrfam": "IPv4", 00:11:17.747 "traddr": "10.0.0.3", 00:11:17.747 "trsvcid": "4420" 00:11:17.747 }, 00:11:17.747 "peer_address": { 00:11:17.747 "trtype": "TCP", 00:11:17.747 "adrfam": "IPv4", 00:11:17.747 "traddr": "10.0.0.1", 00:11:17.747 "trsvcid": "54068" 00:11:17.747 }, 00:11:17.747 "auth": { 00:11:17.747 "state": "completed", 00:11:17.747 "digest": "sha512", 00:11:17.747 "dhgroup": "null" 00:11:17.747 } 00:11:17.747 } 00:11:17.747 ]' 00:11:17.747 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:18.065 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:18.065 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:18.065 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:18.065 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:18.065 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:18.065 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:18.065 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:18.358 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY0MDZkOGNlMjc4MWQ4ZjYyNTAyOTNjNWUzNGExOTQ2NjI0OGE3ZjZlNGQxOWM2YULh6g==: --dhchap-ctrl-secret DHHC-1:01:NzY5OTJjM2UzNDQ0NjQ2NzEyYjk2ZGExZTlkZTZkM2NLb/z4: 00:11:18.358 13:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:02:ZDY0MDZkOGNlMjc4MWQ4ZjYyNTAyOTNjNWUzNGExOTQ2NjI0OGE3ZjZlNGQxOWM2YULh6g==: --dhchap-ctrl-secret DHHC-1:01:NzY5OTJjM2UzNDQ0NjQ2NzEyYjk2ZGExZTlkZTZkM2NLb/z4: 00:11:18.926 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:18.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:18.926 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:11:18.926 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.926 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:11:18.926 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.926 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:18.926 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:18.926 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:19.185 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:11:19.185 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:19.185 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:19.185 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:19.185 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:19.185 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:19.185 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key3 00:11:19.185 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.185 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.185 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.185 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:19.185 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:19.185 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:19.445 00:11:19.445 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:19.445 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:19.445 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:19.703 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:19.703 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:19.703 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
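
On the target side, each round registers the host NQN on cnode0 with the DH-HMAC-CHAP key for that key id (adding --dhchap-ctrlr-key only when a controller key exists; key3 has none in this run) and removes the host again once the nvme-cli check is done. A sketch of the two calls, where rpc_cmd is the test harness's RPC helper for the target's default socket:

    # require DH-HMAC-CHAP with key3 for this host (no controller key defined for key3)
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key3
    # ... attach, verify qpairs, nvme connect/disconnect ...
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4
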
xtrace_disable 00:11:19.703 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.703 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.703 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:19.703 { 00:11:19.703 "cntlid": 103, 00:11:19.703 "qid": 0, 00:11:19.703 "state": "enabled", 00:11:19.703 "thread": "nvmf_tgt_poll_group_000", 00:11:19.703 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:11:19.703 "listen_address": { 00:11:19.703 "trtype": "TCP", 00:11:19.703 "adrfam": "IPv4", 00:11:19.703 "traddr": "10.0.0.3", 00:11:19.703 "trsvcid": "4420" 00:11:19.703 }, 00:11:19.703 "peer_address": { 00:11:19.703 "trtype": "TCP", 00:11:19.703 "adrfam": "IPv4", 00:11:19.703 "traddr": "10.0.0.1", 00:11:19.703 "trsvcid": "54092" 00:11:19.703 }, 00:11:19.703 "auth": { 00:11:19.703 "state": "completed", 00:11:19.703 "digest": "sha512", 00:11:19.703 "dhgroup": "null" 00:11:19.703 } 00:11:19.703 } 00:11:19.703 ]' 00:11:19.703 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:19.703 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:19.703 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:19.703 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:19.703 13:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:19.703 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:19.703 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:19.703 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:19.961 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZkZjI0Njc3NWQ4YTk5NDkyZGZlZDM1NjIzMDBkY2ZmMzQ2ZTQxZmIxYjQ3NzdiZjY3YTFhNjYxMGQ4YjQ0Y2MZpOQ=: 00:11:19.961 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:03:M2ZkZjI0Njc3NWQ4YTk5NDkyZGZlZDM1NjIzMDBkY2ZmMzQ2ZTQxZmIxYjQ3NzdiZjY3YTFhNjYxMGQ4YjQ0Y2MZpOQ=: 00:11:20.897 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:20.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:20.897 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:11:20.897 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.897 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.897 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:11:20.897 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:20.897 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:20.897 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:20.897 13:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:20.897 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:11:20.897 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:20.897 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:20.897 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:20.897 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:20.897 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:20.898 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:20.898 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.898 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.156 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.156 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:21.156 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:21.156 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:21.414 00:11:21.414 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:21.414 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:21.414 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:21.672 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:21.672 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:21.672 
13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.672 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.672 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.672 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:21.672 { 00:11:21.672 "cntlid": 105, 00:11:21.672 "qid": 0, 00:11:21.672 "state": "enabled", 00:11:21.672 "thread": "nvmf_tgt_poll_group_000", 00:11:21.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:11:21.672 "listen_address": { 00:11:21.672 "trtype": "TCP", 00:11:21.672 "adrfam": "IPv4", 00:11:21.672 "traddr": "10.0.0.3", 00:11:21.672 "trsvcid": "4420" 00:11:21.672 }, 00:11:21.672 "peer_address": { 00:11:21.672 "trtype": "TCP", 00:11:21.672 "adrfam": "IPv4", 00:11:21.672 "traddr": "10.0.0.1", 00:11:21.672 "trsvcid": "58394" 00:11:21.672 }, 00:11:21.672 "auth": { 00:11:21.672 "state": "completed", 00:11:21.672 "digest": "sha512", 00:11:21.672 "dhgroup": "ffdhe2048" 00:11:21.672 } 00:11:21.672 } 00:11:21.672 ]' 00:11:21.672 13:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:21.672 13:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:21.672 13:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:21.672 13:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:21.672 13:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:21.931 13:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:21.931 13:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:21.931 13:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:22.189 13:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2FmN2U1MTUzMjliYzBmZTU3M2ZjNjcwMDhjODg0OTdkMjI3NDkzZGE5NmFiN2EyQFmPpA==: --dhchap-ctrl-secret DHHC-1:03:Njg1NWJiYmZhMjkxMzRjNWM1NDE2OGNlYjc4YjgzMzRiMDVkZmJhMWQ0OGZlMjAyZjNhZDExNWY1YWQzNWE3Ntyf5n4=: 00:11:22.189 13:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:00:N2FmN2U1MTUzMjliYzBmZTU3M2ZjNjcwMDhjODg0OTdkMjI3NDkzZGE5NmFiN2EyQFmPpA==: --dhchap-ctrl-secret DHHC-1:03:Njg1NWJiYmZhMjkxMzRjNWM1NDE2OGNlYjc4YjgzMzRiMDVkZmJhMWQ0OGZlMjAyZjNhZDExNWY1YWQzNWE3Ntyf5n4=: 00:11:22.755 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:22.755 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:22.755 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:11:22.755 13:50:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.755 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.755 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.755 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:22.755 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:22.756 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:23.014 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:11:23.014 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:23.014 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:23.014 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:23.014 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:23.014 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:23.014 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:23.014 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.014 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.014 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.014 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:23.014 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:23.014 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:23.272 00:11:23.272 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:23.272 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:23.273 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:23.531 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:11:23.531 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:23.531 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.531 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.531 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.531 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:23.531 { 00:11:23.531 "cntlid": 107, 00:11:23.531 "qid": 0, 00:11:23.531 "state": "enabled", 00:11:23.531 "thread": "nvmf_tgt_poll_group_000", 00:11:23.532 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:11:23.532 "listen_address": { 00:11:23.532 "trtype": "TCP", 00:11:23.532 "adrfam": "IPv4", 00:11:23.532 "traddr": "10.0.0.3", 00:11:23.532 "trsvcid": "4420" 00:11:23.532 }, 00:11:23.532 "peer_address": { 00:11:23.532 "trtype": "TCP", 00:11:23.532 "adrfam": "IPv4", 00:11:23.532 "traddr": "10.0.0.1", 00:11:23.532 "trsvcid": "58410" 00:11:23.532 }, 00:11:23.532 "auth": { 00:11:23.532 "state": "completed", 00:11:23.532 "digest": "sha512", 00:11:23.532 "dhgroup": "ffdhe2048" 00:11:23.532 } 00:11:23.532 } 00:11:23.532 ]' 00:11:23.532 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:23.532 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:23.532 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:23.789 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:23.789 13:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:23.789 13:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:23.789 13:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:23.789 13:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:24.046 13:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjQ2ZDlmZjJmZTk0NGE4YmY4YzZmOTMxYzlmMmI0Y2aDcmP9: --dhchap-ctrl-secret DHHC-1:02:MmQxNTRiNGRjZDkwOWI5YmQ4MmZjZjhiMWUwOTc2ZjE0ZmEwMzc0NTgyZGVkYzdiP7GVWQ==: 00:11:24.046 13:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:01:ZjQ2ZDlmZjJmZTk0NGE4YmY4YzZmOTMxYzlmMmI0Y2aDcmP9: --dhchap-ctrl-secret DHHC-1:02:MmQxNTRiNGRjZDkwOWI5YmQ4MmZjZjhiMWUwOTc2ZjE0ZmEwMzc0NTgyZGVkYzdiP7GVWQ==: 00:11:24.612 13:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:24.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:24.612 13:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:11:24.612 13:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.612 13:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.612 13:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.612 13:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:24.612 13:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:24.612 13:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:25.179 13:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:11:25.179 13:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:25.179 13:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:25.179 13:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:25.179 13:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:25.179 13:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:25.179 13:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:25.179 13:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.179 13:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.179 13:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.179 13:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:25.179 13:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:25.179 13:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:25.437 00:11:25.437 13:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:25.437 13:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:25.437 13:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:11:25.696 13:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:25.696 13:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:25.696 13:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.696 13:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.696 13:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.696 13:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:25.696 { 00:11:25.696 "cntlid": 109, 00:11:25.696 "qid": 0, 00:11:25.696 "state": "enabled", 00:11:25.696 "thread": "nvmf_tgt_poll_group_000", 00:11:25.696 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:11:25.696 "listen_address": { 00:11:25.696 "trtype": "TCP", 00:11:25.696 "adrfam": "IPv4", 00:11:25.696 "traddr": "10.0.0.3", 00:11:25.696 "trsvcid": "4420" 00:11:25.696 }, 00:11:25.696 "peer_address": { 00:11:25.696 "trtype": "TCP", 00:11:25.696 "adrfam": "IPv4", 00:11:25.696 "traddr": "10.0.0.1", 00:11:25.696 "trsvcid": "58438" 00:11:25.696 }, 00:11:25.696 "auth": { 00:11:25.696 "state": "completed", 00:11:25.696 "digest": "sha512", 00:11:25.696 "dhgroup": "ffdhe2048" 00:11:25.696 } 00:11:25.696 } 00:11:25.696 ]' 00:11:25.696 13:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:25.696 13:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:25.696 13:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:25.696 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:25.696 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:25.696 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:25.696 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:25.696 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:26.266 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY0MDZkOGNlMjc4MWQ4ZjYyNTAyOTNjNWUzNGExOTQ2NjI0OGE3ZjZlNGQxOWM2YULh6g==: --dhchap-ctrl-secret DHHC-1:01:NzY5OTJjM2UzNDQ0NjQ2NzEyYjk2ZGExZTlkZTZkM2NLb/z4: 00:11:26.266 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:02:ZDY0MDZkOGNlMjc4MWQ4ZjYyNTAyOTNjNWUzNGExOTQ2NjI0OGE3ZjZlNGQxOWM2YULh6g==: --dhchap-ctrl-secret DHHC-1:01:NzY5OTJjM2UzNDQ0NjQ2NzEyYjk2ZGExZTlkZTZkM2NLb/z4: 00:11:26.832 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:26.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:11:26.832 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:11:26.832 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.832 13:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.832 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.832 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:26.832 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:26.832 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:27.091 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:11:27.091 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:27.091 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:27.091 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:27.091 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:27.091 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:27.091 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key3 00:11:27.091 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.091 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.091 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.091 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:27.091 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:27.091 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:27.349 00:11:27.349 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:27.349 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:27.349 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:27.608 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:27.608 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:27.608 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.608 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.608 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.608 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:27.608 { 00:11:27.608 "cntlid": 111, 00:11:27.608 "qid": 0, 00:11:27.608 "state": "enabled", 00:11:27.608 "thread": "nvmf_tgt_poll_group_000", 00:11:27.608 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:11:27.608 "listen_address": { 00:11:27.608 "trtype": "TCP", 00:11:27.608 "adrfam": "IPv4", 00:11:27.608 "traddr": "10.0.0.3", 00:11:27.608 "trsvcid": "4420" 00:11:27.609 }, 00:11:27.609 "peer_address": { 00:11:27.609 "trtype": "TCP", 00:11:27.609 "adrfam": "IPv4", 00:11:27.609 "traddr": "10.0.0.1", 00:11:27.609 "trsvcid": "58472" 00:11:27.609 }, 00:11:27.609 "auth": { 00:11:27.609 "state": "completed", 00:11:27.609 "digest": "sha512", 00:11:27.609 "dhgroup": "ffdhe2048" 00:11:27.609 } 00:11:27.609 } 00:11:27.609 ]' 00:11:27.609 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:27.609 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:27.609 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:27.609 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:27.609 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:27.609 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:27.609 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:27.609 13:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:27.868 13:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZkZjI0Njc3NWQ4YTk5NDkyZGZlZDM1NjIzMDBkY2ZmMzQ2ZTQxZmIxYjQ3NzdiZjY3YTFhNjYxMGQ4YjQ0Y2MZpOQ=: 00:11:27.868 13:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:03:M2ZkZjI0Njc3NWQ4YTk5NDkyZGZlZDM1NjIzMDBkY2ZmMzQ2ZTQxZmIxYjQ3NzdiZjY3YTFhNjYxMGQ4YjQ0Y2MZpOQ=: 00:11:28.804 13:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:28.804 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:28.804 13:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:11:28.804 13:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.804 13:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.804 13:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.804 13:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:28.804 13:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:28.804 13:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:28.804 13:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:29.063 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:11:29.063 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:29.063 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:29.063 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:29.063 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:29.063 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:29.063 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.063 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.063 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.063 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.063 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.063 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.064 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.322 00:11:29.322 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:29.322 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:11:29.322 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:29.582 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:29.582 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:29.582 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.582 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.582 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.582 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:29.582 { 00:11:29.582 "cntlid": 113, 00:11:29.582 "qid": 0, 00:11:29.582 "state": "enabled", 00:11:29.582 "thread": "nvmf_tgt_poll_group_000", 00:11:29.582 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:11:29.582 "listen_address": { 00:11:29.582 "trtype": "TCP", 00:11:29.582 "adrfam": "IPv4", 00:11:29.582 "traddr": "10.0.0.3", 00:11:29.582 "trsvcid": "4420" 00:11:29.582 }, 00:11:29.582 "peer_address": { 00:11:29.582 "trtype": "TCP", 00:11:29.582 "adrfam": "IPv4", 00:11:29.582 "traddr": "10.0.0.1", 00:11:29.582 "trsvcid": "58506" 00:11:29.582 }, 00:11:29.582 "auth": { 00:11:29.582 "state": "completed", 00:11:29.582 "digest": "sha512", 00:11:29.582 "dhgroup": "ffdhe3072" 00:11:29.582 } 00:11:29.582 } 00:11:29.582 ]' 00:11:29.582 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:29.582 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:29.582 13:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:29.841 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:29.841 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:29.841 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:29.841 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:29.841 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:30.100 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2FmN2U1MTUzMjliYzBmZTU3M2ZjNjcwMDhjODg0OTdkMjI3NDkzZGE5NmFiN2EyQFmPpA==: --dhchap-ctrl-secret DHHC-1:03:Njg1NWJiYmZhMjkxMzRjNWM1NDE2OGNlYjc4YjgzMzRiMDVkZmJhMWQ0OGZlMjAyZjNhZDExNWY1YWQzNWE3Ntyf5n4=: 00:11:30.100 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:00:N2FmN2U1MTUzMjliYzBmZTU3M2ZjNjcwMDhjODg0OTdkMjI3NDkzZGE5NmFiN2EyQFmPpA==: --dhchap-ctrl-secret 
DHHC-1:03:Njg1NWJiYmZhMjkxMzRjNWM1NDE2OGNlYjc4YjgzMzRiMDVkZmJhMWQ0OGZlMjAyZjNhZDExNWY1YWQzNWE3Ntyf5n4=: 00:11:30.679 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:30.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:30.679 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:11:30.679 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.679 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.679 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.679 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:30.679 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:30.679 13:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:30.940 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:11:30.940 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:30.940 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:30.940 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:30.940 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:30.940 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:30.940 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:30.940 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.940 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.940 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.940 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:30.940 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:30.940 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.203 00:11:31.203 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:31.203 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:31.203 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:31.462 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:31.462 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:31.462 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.462 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.462 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.462 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:31.462 { 00:11:31.462 "cntlid": 115, 00:11:31.462 "qid": 0, 00:11:31.462 "state": "enabled", 00:11:31.462 "thread": "nvmf_tgt_poll_group_000", 00:11:31.462 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:11:31.462 "listen_address": { 00:11:31.462 "trtype": "TCP", 00:11:31.462 "adrfam": "IPv4", 00:11:31.462 "traddr": "10.0.0.3", 00:11:31.462 "trsvcid": "4420" 00:11:31.462 }, 00:11:31.462 "peer_address": { 00:11:31.462 "trtype": "TCP", 00:11:31.462 "adrfam": "IPv4", 00:11:31.462 "traddr": "10.0.0.1", 00:11:31.462 "trsvcid": "54556" 00:11:31.462 }, 00:11:31.462 "auth": { 00:11:31.462 "state": "completed", 00:11:31.462 "digest": "sha512", 00:11:31.462 "dhgroup": "ffdhe3072" 00:11:31.462 } 00:11:31.462 } 00:11:31.462 ]' 00:11:31.462 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:31.462 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:31.462 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:31.462 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:31.462 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:31.721 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:31.721 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:31.721 13:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:31.980 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjQ2ZDlmZjJmZTk0NGE4YmY4YzZmOTMxYzlmMmI0Y2aDcmP9: --dhchap-ctrl-secret DHHC-1:02:MmQxNTRiNGRjZDkwOWI5YmQ4MmZjZjhiMWUwOTc2ZjE0ZmEwMzc0NTgyZGVkYzdiP7GVWQ==: 00:11:31.980 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid 
cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:01:ZjQ2ZDlmZjJmZTk0NGE4YmY4YzZmOTMxYzlmMmI0Y2aDcmP9: --dhchap-ctrl-secret DHHC-1:02:MmQxNTRiNGRjZDkwOWI5YmQ4MmZjZjhiMWUwOTc2ZjE0ZmEwMzc0NTgyZGVkYzdiP7GVWQ==: 00:11:32.548 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:32.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:32.548 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:11:32.548 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.548 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.548 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.548 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:32.548 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:32.548 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:32.807 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:11:32.807 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:32.807 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:32.807 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:32.807 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:32.807 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:32.808 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:32.808 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.808 13:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.808 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.808 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:32.808 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:32.808 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:33.066 00:11:33.066 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:33.066 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:33.066 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:33.324 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:33.324 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:33.324 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.324 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.324 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.324 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:33.324 { 00:11:33.324 "cntlid": 117, 00:11:33.324 "qid": 0, 00:11:33.324 "state": "enabled", 00:11:33.324 "thread": "nvmf_tgt_poll_group_000", 00:11:33.324 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:11:33.324 "listen_address": { 00:11:33.324 "trtype": "TCP", 00:11:33.324 "adrfam": "IPv4", 00:11:33.324 "traddr": "10.0.0.3", 00:11:33.324 "trsvcid": "4420" 00:11:33.324 }, 00:11:33.324 "peer_address": { 00:11:33.324 "trtype": "TCP", 00:11:33.324 "adrfam": "IPv4", 00:11:33.324 "traddr": "10.0.0.1", 00:11:33.324 "trsvcid": "54576" 00:11:33.324 }, 00:11:33.324 "auth": { 00:11:33.324 "state": "completed", 00:11:33.324 "digest": "sha512", 00:11:33.324 "dhgroup": "ffdhe3072" 00:11:33.324 } 00:11:33.324 } 00:11:33.324 ]' 00:11:33.324 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:33.325 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:33.325 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:33.584 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:33.584 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:33.584 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:33.584 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:33.584 13:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:33.844 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY0MDZkOGNlMjc4MWQ4ZjYyNTAyOTNjNWUzNGExOTQ2NjI0OGE3ZjZlNGQxOWM2YULh6g==: --dhchap-ctrl-secret DHHC-1:01:NzY5OTJjM2UzNDQ0NjQ2NzEyYjk2ZGExZTlkZTZkM2NLb/z4: 00:11:33.844 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:02:ZDY0MDZkOGNlMjc4MWQ4ZjYyNTAyOTNjNWUzNGExOTQ2NjI0OGE3ZjZlNGQxOWM2YULh6g==: --dhchap-ctrl-secret DHHC-1:01:NzY5OTJjM2UzNDQ0NjQ2NzEyYjk2ZGExZTlkZTZkM2NLb/z4: 00:11:34.411 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:34.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:34.411 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:11:34.411 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.411 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.411 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.411 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:34.411 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:34.411 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:34.670 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:11:34.670 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:34.670 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:34.670 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:34.670 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:34.670 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:34.670 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key3 00:11:34.670 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.670 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.670 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.670 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:34.670 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:34.670 13:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:34.929 00:11:34.929 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:34.929 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:34.929 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:35.497 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:35.497 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:35.497 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.497 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.497 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.497 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:35.497 { 00:11:35.497 "cntlid": 119, 00:11:35.497 "qid": 0, 00:11:35.497 "state": "enabled", 00:11:35.497 "thread": "nvmf_tgt_poll_group_000", 00:11:35.497 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:11:35.497 "listen_address": { 00:11:35.497 "trtype": "TCP", 00:11:35.497 "adrfam": "IPv4", 00:11:35.497 "traddr": "10.0.0.3", 00:11:35.497 "trsvcid": "4420" 00:11:35.497 }, 00:11:35.497 "peer_address": { 00:11:35.497 "trtype": "TCP", 00:11:35.497 "adrfam": "IPv4", 00:11:35.497 "traddr": "10.0.0.1", 00:11:35.497 "trsvcid": "54610" 00:11:35.497 }, 00:11:35.497 "auth": { 00:11:35.497 "state": "completed", 00:11:35.497 "digest": "sha512", 00:11:35.497 "dhgroup": "ffdhe3072" 00:11:35.497 } 00:11:35.497 } 00:11:35.497 ]' 00:11:35.497 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:35.497 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:35.497 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:35.497 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:35.497 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:35.497 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:35.497 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:35.497 13:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:35.756 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZkZjI0Njc3NWQ4YTk5NDkyZGZlZDM1NjIzMDBkY2ZmMzQ2ZTQxZmIxYjQ3NzdiZjY3YTFhNjYxMGQ4YjQ0Y2MZpOQ=: 00:11:35.756 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:03:M2ZkZjI0Njc3NWQ4YTk5NDkyZGZlZDM1NjIzMDBkY2ZmMzQ2ZTQxZmIxYjQ3NzdiZjY3YTFhNjYxMGQ4YjQ0Y2MZpOQ=: 00:11:36.321 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:36.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:36.321 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:11:36.321 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.321 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.321 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.321 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:36.321 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:36.321 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:36.321 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:36.580 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:11:36.580 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:36.580 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:36.580 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:36.580 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:36.580 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:36.580 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:36.580 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.580 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.580 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.580 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:36.580 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:36.580 13:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:36.838 00:11:36.838 13:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:36.838 13:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:36.838 13:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:37.097 13:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:37.097 13:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:37.097 13:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.097 13:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.097 13:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.097 13:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:37.097 { 00:11:37.097 "cntlid": 121, 00:11:37.097 "qid": 0, 00:11:37.097 "state": "enabled", 00:11:37.097 "thread": "nvmf_tgt_poll_group_000", 00:11:37.097 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:11:37.097 "listen_address": { 00:11:37.097 "trtype": "TCP", 00:11:37.097 "adrfam": "IPv4", 00:11:37.097 "traddr": "10.0.0.3", 00:11:37.097 "trsvcid": "4420" 00:11:37.097 }, 00:11:37.097 "peer_address": { 00:11:37.097 "trtype": "TCP", 00:11:37.097 "adrfam": "IPv4", 00:11:37.097 "traddr": "10.0.0.1", 00:11:37.097 "trsvcid": "54628" 00:11:37.097 }, 00:11:37.097 "auth": { 00:11:37.097 "state": "completed", 00:11:37.097 "digest": "sha512", 00:11:37.097 "dhgroup": "ffdhe4096" 00:11:37.097 } 00:11:37.097 } 00:11:37.097 ]' 00:11:37.097 13:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:37.097 13:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:37.098 13:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:37.357 13:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:37.357 13:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:37.357 13:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:37.357 13:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:37.357 13:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:37.616 13:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2FmN2U1MTUzMjliYzBmZTU3M2ZjNjcwMDhjODg0OTdkMjI3NDkzZGE5NmFiN2EyQFmPpA==: --dhchap-ctrl-secret 
DHHC-1:03:Njg1NWJiYmZhMjkxMzRjNWM1NDE2OGNlYjc4YjgzMzRiMDVkZmJhMWQ0OGZlMjAyZjNhZDExNWY1YWQzNWE3Ntyf5n4=: 00:11:37.616 13:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:00:N2FmN2U1MTUzMjliYzBmZTU3M2ZjNjcwMDhjODg0OTdkMjI3NDkzZGE5NmFiN2EyQFmPpA==: --dhchap-ctrl-secret DHHC-1:03:Njg1NWJiYmZhMjkxMzRjNWM1NDE2OGNlYjc4YjgzMzRiMDVkZmJhMWQ0OGZlMjAyZjNhZDExNWY1YWQzNWE3Ntyf5n4=: 00:11:38.183 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:38.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:38.183 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:11:38.183 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.183 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.183 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.183 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:38.183 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:38.183 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:38.442 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:11:38.442 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:38.442 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:38.442 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:38.442 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:38.442 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:38.442 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:38.442 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.442 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.442 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.442 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:38.442 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:38.442 13:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:38.700 00:11:38.700 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:38.700 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:38.700 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:38.959 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:38.959 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:38.959 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.959 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.959 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.959 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:38.959 { 00:11:38.959 "cntlid": 123, 00:11:38.959 "qid": 0, 00:11:38.959 "state": "enabled", 00:11:38.959 "thread": "nvmf_tgt_poll_group_000", 00:11:38.959 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:11:38.959 "listen_address": { 00:11:38.959 "trtype": "TCP", 00:11:38.959 "adrfam": "IPv4", 00:11:38.959 "traddr": "10.0.0.3", 00:11:38.959 "trsvcid": "4420" 00:11:38.959 }, 00:11:38.959 "peer_address": { 00:11:38.959 "trtype": "TCP", 00:11:38.959 "adrfam": "IPv4", 00:11:38.959 "traddr": "10.0.0.1", 00:11:38.959 "trsvcid": "54658" 00:11:38.959 }, 00:11:38.959 "auth": { 00:11:38.959 "state": "completed", 00:11:38.959 "digest": "sha512", 00:11:38.959 "dhgroup": "ffdhe4096" 00:11:38.959 } 00:11:38.959 } 00:11:38.959 ]' 00:11:38.959 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:38.959 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:38.959 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:38.959 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:38.959 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:39.218 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:39.218 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:39.218 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:39.477 13:50:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjQ2ZDlmZjJmZTk0NGE4YmY4YzZmOTMxYzlmMmI0Y2aDcmP9: --dhchap-ctrl-secret DHHC-1:02:MmQxNTRiNGRjZDkwOWI5YmQ4MmZjZjhiMWUwOTc2ZjE0ZmEwMzc0NTgyZGVkYzdiP7GVWQ==: 00:11:39.477 13:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:01:ZjQ2ZDlmZjJmZTk0NGE4YmY4YzZmOTMxYzlmMmI0Y2aDcmP9: --dhchap-ctrl-secret DHHC-1:02:MmQxNTRiNGRjZDkwOWI5YmQ4MmZjZjhiMWUwOTc2ZjE0ZmEwMzc0NTgyZGVkYzdiP7GVWQ==: 00:11:40.045 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:40.045 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:40.045 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:11:40.045 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.045 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.045 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.045 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:40.045 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:40.045 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:40.304 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:11:40.304 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:40.304 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:40.304 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:40.304 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:40.304 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:40.304 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:40.304 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.304 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.304 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.304 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:40.304 13:50:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:40.304 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:40.563 00:11:40.563 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:40.563 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:40.563 13:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:40.822 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:40.822 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:40.822 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.822 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.822 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.822 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:40.822 { 00:11:40.822 "cntlid": 125, 00:11:40.822 "qid": 0, 00:11:40.822 "state": "enabled", 00:11:40.822 "thread": "nvmf_tgt_poll_group_000", 00:11:40.822 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:11:40.822 "listen_address": { 00:11:40.822 "trtype": "TCP", 00:11:40.822 "adrfam": "IPv4", 00:11:40.822 "traddr": "10.0.0.3", 00:11:40.822 "trsvcid": "4420" 00:11:40.822 }, 00:11:40.822 "peer_address": { 00:11:40.822 "trtype": "TCP", 00:11:40.822 "adrfam": "IPv4", 00:11:40.822 "traddr": "10.0.0.1", 00:11:40.822 "trsvcid": "56662" 00:11:40.822 }, 00:11:40.822 "auth": { 00:11:40.822 "state": "completed", 00:11:40.822 "digest": "sha512", 00:11:40.822 "dhgroup": "ffdhe4096" 00:11:40.822 } 00:11:40.822 } 00:11:40.822 ]' 00:11:40.822 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:40.822 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:40.822 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:40.822 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:40.822 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:41.079 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:41.079 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:41.079 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:41.337 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY0MDZkOGNlMjc4MWQ4ZjYyNTAyOTNjNWUzNGExOTQ2NjI0OGE3ZjZlNGQxOWM2YULh6g==: --dhchap-ctrl-secret DHHC-1:01:NzY5OTJjM2UzNDQ0NjQ2NzEyYjk2ZGExZTlkZTZkM2NLb/z4: 00:11:41.337 13:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:02:ZDY0MDZkOGNlMjc4MWQ4ZjYyNTAyOTNjNWUzNGExOTQ2NjI0OGE3ZjZlNGQxOWM2YULh6g==: --dhchap-ctrl-secret DHHC-1:01:NzY5OTJjM2UzNDQ0NjQ2NzEyYjk2ZGExZTlkZTZkM2NLb/z4: 00:11:41.905 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:41.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:41.905 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:11:41.905 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.905 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.905 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.905 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:41.905 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:41.905 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:42.164 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:11:42.164 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:42.164 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:42.164 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:42.164 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:42.164 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:42.164 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key3 00:11:42.164 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.164 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.164 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.164 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:11:42.164 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:42.164 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:42.732 00:11:42.732 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:42.732 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:42.732 13:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:42.732 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:42.732 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:42.732 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.732 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.732 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.732 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:42.732 { 00:11:42.732 "cntlid": 127, 00:11:42.732 "qid": 0, 00:11:42.732 "state": "enabled", 00:11:42.732 "thread": "nvmf_tgt_poll_group_000", 00:11:42.732 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:11:42.732 "listen_address": { 00:11:42.732 "trtype": "TCP", 00:11:42.732 "adrfam": "IPv4", 00:11:42.732 "traddr": "10.0.0.3", 00:11:42.732 "trsvcid": "4420" 00:11:42.732 }, 00:11:42.732 "peer_address": { 00:11:42.732 "trtype": "TCP", 00:11:42.732 "adrfam": "IPv4", 00:11:42.732 "traddr": "10.0.0.1", 00:11:42.732 "trsvcid": "56688" 00:11:42.732 }, 00:11:42.732 "auth": { 00:11:42.732 "state": "completed", 00:11:42.732 "digest": "sha512", 00:11:42.732 "dhgroup": "ffdhe4096" 00:11:42.732 } 00:11:42.732 } 00:11:42.732 ]' 00:11:42.732 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:42.732 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:42.991 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:42.991 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:42.991 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:42.991 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:42.991 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:42.991 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:43.250 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZkZjI0Njc3NWQ4YTk5NDkyZGZlZDM1NjIzMDBkY2ZmMzQ2ZTQxZmIxYjQ3NzdiZjY3YTFhNjYxMGQ4YjQ0Y2MZpOQ=: 00:11:43.250 13:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:03:M2ZkZjI0Njc3NWQ4YTk5NDkyZGZlZDM1NjIzMDBkY2ZmMzQ2ZTQxZmIxYjQ3NzdiZjY3YTFhNjYxMGQ4YjQ0Y2MZpOQ=: 00:11:43.820 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:43.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:43.820 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:11:43.820 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.820 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.820 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.820 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:43.820 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:43.820 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:43.820 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:44.079 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:11:44.079 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:44.079 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:44.079 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:44.079 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:44.079 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:44.079 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:44.079 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.079 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.079 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.079 13:50:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:44.079 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:44.079 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:44.647 00:11:44.647 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:44.647 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:44.647 13:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:44.907 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:44.907 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:44.907 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.907 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.907 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.907 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:44.907 { 00:11:44.907 "cntlid": 129, 00:11:44.907 "qid": 0, 00:11:44.907 "state": "enabled", 00:11:44.907 "thread": "nvmf_tgt_poll_group_000", 00:11:44.907 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:11:44.907 "listen_address": { 00:11:44.907 "trtype": "TCP", 00:11:44.907 "adrfam": "IPv4", 00:11:44.907 "traddr": "10.0.0.3", 00:11:44.907 "trsvcid": "4420" 00:11:44.907 }, 00:11:44.907 "peer_address": { 00:11:44.907 "trtype": "TCP", 00:11:44.907 "adrfam": "IPv4", 00:11:44.907 "traddr": "10.0.0.1", 00:11:44.907 "trsvcid": "56714" 00:11:44.907 }, 00:11:44.907 "auth": { 00:11:44.907 "state": "completed", 00:11:44.907 "digest": "sha512", 00:11:44.907 "dhgroup": "ffdhe6144" 00:11:44.907 } 00:11:44.907 } 00:11:44.907 ]' 00:11:44.907 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:44.907 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:44.907 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:44.907 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:44.907 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:44.907 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:44.907 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:44.907 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:45.166 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2FmN2U1MTUzMjliYzBmZTU3M2ZjNjcwMDhjODg0OTdkMjI3NDkzZGE5NmFiN2EyQFmPpA==: --dhchap-ctrl-secret DHHC-1:03:Njg1NWJiYmZhMjkxMzRjNWM1NDE2OGNlYjc4YjgzMzRiMDVkZmJhMWQ0OGZlMjAyZjNhZDExNWY1YWQzNWE3Ntyf5n4=: 00:11:45.166 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:00:N2FmN2U1MTUzMjliYzBmZTU3M2ZjNjcwMDhjODg0OTdkMjI3NDkzZGE5NmFiN2EyQFmPpA==: --dhchap-ctrl-secret DHHC-1:03:Njg1NWJiYmZhMjkxMzRjNWM1NDE2OGNlYjc4YjgzMzRiMDVkZmJhMWQ0OGZlMjAyZjNhZDExNWY1YWQzNWE3Ntyf5n4=: 00:11:45.733 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:45.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:45.733 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:11:45.733 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.733 13:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.733 13:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.733 13:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:45.733 13:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:45.733 13:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:45.991 13:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:11:45.991 13:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:45.991 13:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:45.991 13:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:45.991 13:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:45.991 13:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:45.991 13:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.991 13:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.991 13:50:45 
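For reference, the per-key sequence that target/auth.sh is driving at this point, reconstructed only from the commands visible in this trace (the socket path, NQNs, key names and jq filters are the ones the test itself uses; the target-side rpc_cmd socket is not shown in the log), is roughly:

# one connect_authenticate iteration (sketch, not the script itself)
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4
SUBNQN=nqn.2024-03.io.spdk:cnode0
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# host side: restrict the initiator to the digest/dhgroup combination under test
$RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
# target side: allow this host with the key pair under test (issued through the script's rpc_cmd helper)
rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1
# host side: attach a controller, which forces a DH-HMAC-CHAP exchange
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
# verify: the controller exists and the qpair reports the negotiated digest/dhgroup with auth state "completed"
$RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
# tear down before the next key/dhgroup combination
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0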
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.991 13:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.991 13:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.991 13:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.991 13:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:46.557 00:11:46.557 13:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:46.557 13:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:46.557 13:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:46.814 13:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:46.814 13:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:46.814 13:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.814 13:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.814 13:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.814 13:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:46.814 { 00:11:46.814 "cntlid": 131, 00:11:46.814 "qid": 0, 00:11:46.814 "state": "enabled", 00:11:46.814 "thread": "nvmf_tgt_poll_group_000", 00:11:46.814 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:11:46.814 "listen_address": { 00:11:46.814 "trtype": "TCP", 00:11:46.814 "adrfam": "IPv4", 00:11:46.814 "traddr": "10.0.0.3", 00:11:46.814 "trsvcid": "4420" 00:11:46.814 }, 00:11:46.814 "peer_address": { 00:11:46.814 "trtype": "TCP", 00:11:46.814 "adrfam": "IPv4", 00:11:46.814 "traddr": "10.0.0.1", 00:11:46.814 "trsvcid": "56752" 00:11:46.814 }, 00:11:46.814 "auth": { 00:11:46.815 "state": "completed", 00:11:46.815 "digest": "sha512", 00:11:46.815 "dhgroup": "ffdhe6144" 00:11:46.815 } 00:11:46.815 } 00:11:46.815 ]' 00:11:46.815 13:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:46.815 13:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:46.815 13:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:46.815 13:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:46.815 13:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:11:46.815 13:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:46.815 13:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:46.815 13:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:47.073 13:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjQ2ZDlmZjJmZTk0NGE4YmY4YzZmOTMxYzlmMmI0Y2aDcmP9: --dhchap-ctrl-secret DHHC-1:02:MmQxNTRiNGRjZDkwOWI5YmQ4MmZjZjhiMWUwOTc2ZjE0ZmEwMzc0NTgyZGVkYzdiP7GVWQ==: 00:11:47.073 13:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:01:ZjQ2ZDlmZjJmZTk0NGE4YmY4YzZmOTMxYzlmMmI0Y2aDcmP9: --dhchap-ctrl-secret DHHC-1:02:MmQxNTRiNGRjZDkwOWI5YmQ4MmZjZjhiMWUwOTc2ZjE0ZmEwMzc0NTgyZGVkYzdiP7GVWQ==: 00:11:48.004 13:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:48.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:48.004 13:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:11:48.004 13:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.004 13:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.004 13:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.004 13:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:48.004 13:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:48.004 13:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:48.004 13:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:11:48.004 13:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:48.005 13:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:48.005 13:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:48.005 13:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:48.005 13:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:48.005 13:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:48.005 13:50:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.005 13:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.005 13:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.005 13:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:48.005 13:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:48.005 13:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:48.569 00:11:48.569 13:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:48.569 13:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:48.569 13:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:48.827 13:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:48.827 13:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:48.827 13:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.827 13:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.827 13:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.827 13:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:48.827 { 00:11:48.827 "cntlid": 133, 00:11:48.827 "qid": 0, 00:11:48.827 "state": "enabled", 00:11:48.827 "thread": "nvmf_tgt_poll_group_000", 00:11:48.827 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:11:48.827 "listen_address": { 00:11:48.827 "trtype": "TCP", 00:11:48.827 "adrfam": "IPv4", 00:11:48.827 "traddr": "10.0.0.3", 00:11:48.827 "trsvcid": "4420" 00:11:48.827 }, 00:11:48.827 "peer_address": { 00:11:48.827 "trtype": "TCP", 00:11:48.827 "adrfam": "IPv4", 00:11:48.827 "traddr": "10.0.0.1", 00:11:48.827 "trsvcid": "56768" 00:11:48.827 }, 00:11:48.827 "auth": { 00:11:48.827 "state": "completed", 00:11:48.827 "digest": "sha512", 00:11:48.827 "dhgroup": "ffdhe6144" 00:11:48.827 } 00:11:48.827 } 00:11:48.827 ]' 00:11:48.827 13:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:48.827 13:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:48.827 13:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:49.085 13:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:11:49.085 13:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:49.085 13:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:49.085 13:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:49.085 13:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:49.343 13:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY0MDZkOGNlMjc4MWQ4ZjYyNTAyOTNjNWUzNGExOTQ2NjI0OGE3ZjZlNGQxOWM2YULh6g==: --dhchap-ctrl-secret DHHC-1:01:NzY5OTJjM2UzNDQ0NjQ2NzEyYjk2ZGExZTlkZTZkM2NLb/z4: 00:11:49.343 13:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:02:ZDY0MDZkOGNlMjc4MWQ4ZjYyNTAyOTNjNWUzNGExOTQ2NjI0OGE3ZjZlNGQxOWM2YULh6g==: --dhchap-ctrl-secret DHHC-1:01:NzY5OTJjM2UzNDQ0NjQ2NzEyYjk2ZGExZTlkZTZkM2NLb/z4: 00:11:49.909 13:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:49.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:49.909 13:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:11:49.909 13:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.909 13:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.909 13:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.909 13:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:49.909 13:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:49.909 13:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:50.168 13:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:11:50.168 13:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:50.168 13:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:50.168 13:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:50.168 13:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:50.168 13:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:50.168 13:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key3 00:11:50.168 13:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.168 13:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.168 13:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.168 13:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:50.169 13:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:50.169 13:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:50.736 00:11:50.736 13:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:50.736 13:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:50.736 13:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:50.995 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:50.995 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:50.995 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.995 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.995 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.995 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:50.995 { 00:11:50.995 "cntlid": 135, 00:11:50.995 "qid": 0, 00:11:50.995 "state": "enabled", 00:11:50.995 "thread": "nvmf_tgt_poll_group_000", 00:11:50.995 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:11:50.995 "listen_address": { 00:11:50.995 "trtype": "TCP", 00:11:50.995 "adrfam": "IPv4", 00:11:50.995 "traddr": "10.0.0.3", 00:11:50.995 "trsvcid": "4420" 00:11:50.995 }, 00:11:50.995 "peer_address": { 00:11:50.995 "trtype": "TCP", 00:11:50.996 "adrfam": "IPv4", 00:11:50.996 "traddr": "10.0.0.1", 00:11:50.996 "trsvcid": "49352" 00:11:50.996 }, 00:11:50.996 "auth": { 00:11:50.996 "state": "completed", 00:11:50.996 "digest": "sha512", 00:11:50.996 "dhgroup": "ffdhe6144" 00:11:50.996 } 00:11:50.996 } 00:11:50.996 ]' 00:11:50.996 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:50.996 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:50.996 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:50.996 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:50.996 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:51.255 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:51.255 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:51.255 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:51.515 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZkZjI0Njc3NWQ4YTk5NDkyZGZlZDM1NjIzMDBkY2ZmMzQ2ZTQxZmIxYjQ3NzdiZjY3YTFhNjYxMGQ4YjQ0Y2MZpOQ=: 00:11:51.515 13:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:03:M2ZkZjI0Njc3NWQ4YTk5NDkyZGZlZDM1NjIzMDBkY2ZmMzQ2ZTQxZmIxYjQ3NzdiZjY3YTFhNjYxMGQ4YjQ0Y2MZpOQ=: 00:11:52.092 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:52.092 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:52.092 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:11:52.092 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.092 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.092 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.092 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:52.092 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:52.092 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:52.092 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:52.350 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:11:52.350 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:52.350 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:52.350 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:52.350 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:52.350 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:52.350 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:52.350 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.350 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.350 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.350 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:52.350 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:52.350 13:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:52.918 00:11:52.918 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:52.918 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:52.918 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:53.177 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:53.177 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:53.177 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.177 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.177 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.177 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:53.177 { 00:11:53.177 "cntlid": 137, 00:11:53.177 "qid": 0, 00:11:53.177 "state": "enabled", 00:11:53.177 "thread": "nvmf_tgt_poll_group_000", 00:11:53.177 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:11:53.177 "listen_address": { 00:11:53.177 "trtype": "TCP", 00:11:53.177 "adrfam": "IPv4", 00:11:53.177 "traddr": "10.0.0.3", 00:11:53.177 "trsvcid": "4420" 00:11:53.177 }, 00:11:53.177 "peer_address": { 00:11:53.177 "trtype": "TCP", 00:11:53.177 "adrfam": "IPv4", 00:11:53.177 "traddr": "10.0.0.1", 00:11:53.177 "trsvcid": "49378" 00:11:53.177 }, 00:11:53.177 "auth": { 00:11:53.177 "state": "completed", 00:11:53.177 "digest": "sha512", 00:11:53.177 "dhgroup": "ffdhe8192" 00:11:53.177 } 00:11:53.177 } 00:11:53.177 ]' 00:11:53.177 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:53.177 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:53.177 13:50:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:53.178 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:53.178 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:53.436 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:53.437 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:53.437 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:53.695 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2FmN2U1MTUzMjliYzBmZTU3M2ZjNjcwMDhjODg0OTdkMjI3NDkzZGE5NmFiN2EyQFmPpA==: --dhchap-ctrl-secret DHHC-1:03:Njg1NWJiYmZhMjkxMzRjNWM1NDE2OGNlYjc4YjgzMzRiMDVkZmJhMWQ0OGZlMjAyZjNhZDExNWY1YWQzNWE3Ntyf5n4=: 00:11:53.695 13:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:00:N2FmN2U1MTUzMjliYzBmZTU3M2ZjNjcwMDhjODg0OTdkMjI3NDkzZGE5NmFiN2EyQFmPpA==: --dhchap-ctrl-secret DHHC-1:03:Njg1NWJiYmZhMjkxMzRjNWM1NDE2OGNlYjc4YjgzMzRiMDVkZmJhMWQ0OGZlMjAyZjNhZDExNWY1YWQzNWE3Ntyf5n4=: 00:11:54.263 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.264 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:11:54.264 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.264 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.264 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.264 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:54.264 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:54.264 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:54.522 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:11:54.522 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:54.522 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:54.522 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:54.522 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:54.522 13:50:53 
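Each iteration also re-validates the path with the kernel initiator before the host entry is removed again; a condensed form of that step, using the same nvme-cli flags and the key1 secrets that appear verbatim elsewhere in this trace, looks like:

# host-side check with nvme-cli, then cleanup (sketch)
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
  -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 \
  --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 \
  --dhchap-secret DHHC-1:01:ZjQ2ZDlmZjJmZTk0NGE4YmY4YzZmOTMxYzlmMmI0Y2aDcmP9: \
  --dhchap-ctrl-secret DHHC-1:02:MmQxNTRiNGRjZDkwOWI5YmQ4MmZjZjhiMWUwOTc2ZjE0ZmEwMzc0NTgyZGVkYzdiP7GVWQ==:
# the log then shows "NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)" on teardown
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4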
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.522 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:54.522 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.522 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.522 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.522 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:54.522 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:54.522 13:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:55.089 00:11:55.089 13:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:55.089 13:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:55.089 13:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.348 13:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.348 13:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.348 13:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.348 13:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.348 13:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.348 13:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:55.348 { 00:11:55.348 "cntlid": 139, 00:11:55.348 "qid": 0, 00:11:55.348 "state": "enabled", 00:11:55.348 "thread": "nvmf_tgt_poll_group_000", 00:11:55.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:11:55.348 "listen_address": { 00:11:55.348 "trtype": "TCP", 00:11:55.348 "adrfam": "IPv4", 00:11:55.348 "traddr": "10.0.0.3", 00:11:55.348 "trsvcid": "4420" 00:11:55.348 }, 00:11:55.348 "peer_address": { 00:11:55.348 "trtype": "TCP", 00:11:55.348 "adrfam": "IPv4", 00:11:55.348 "traddr": "10.0.0.1", 00:11:55.348 "trsvcid": "49402" 00:11:55.348 }, 00:11:55.348 "auth": { 00:11:55.348 "state": "completed", 00:11:55.348 "digest": "sha512", 00:11:55.348 "dhgroup": "ffdhe8192" 00:11:55.348 } 00:11:55.348 } 00:11:55.348 ]' 00:11:55.348 13:50:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:55.348 13:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:55.348 13:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:55.607 13:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:55.607 13:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:55.607 13:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.607 13:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.607 13:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.867 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjQ2ZDlmZjJmZTk0NGE4YmY4YzZmOTMxYzlmMmI0Y2aDcmP9: --dhchap-ctrl-secret DHHC-1:02:MmQxNTRiNGRjZDkwOWI5YmQ4MmZjZjhiMWUwOTc2ZjE0ZmEwMzc0NTgyZGVkYzdiP7GVWQ==: 00:11:55.867 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:01:ZjQ2ZDlmZjJmZTk0NGE4YmY4YzZmOTMxYzlmMmI0Y2aDcmP9: --dhchap-ctrl-secret DHHC-1:02:MmQxNTRiNGRjZDkwOWI5YmQ4MmZjZjhiMWUwOTc2ZjE0ZmEwMzc0NTgyZGVkYzdiP7GVWQ==: 00:11:56.435 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.435 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:11:56.435 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.435 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.435 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.436 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:56.436 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:56.436 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:56.731 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:11:56.731 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:56.731 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:56.731 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:11:56.731 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:56.731 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:56.731 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:56.731 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.731 13:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.731 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.731 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:56.731 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:56.731 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:57.298 00:11:57.298 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:57.298 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:57.298 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:57.556 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:57.556 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:57.556 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.556 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.556 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.556 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:57.556 { 00:11:57.556 "cntlid": 141, 00:11:57.556 "qid": 0, 00:11:57.556 "state": "enabled", 00:11:57.556 "thread": "nvmf_tgt_poll_group_000", 00:11:57.556 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:11:57.556 "listen_address": { 00:11:57.556 "trtype": "TCP", 00:11:57.556 "adrfam": "IPv4", 00:11:57.556 "traddr": "10.0.0.3", 00:11:57.556 "trsvcid": "4420" 00:11:57.556 }, 00:11:57.556 "peer_address": { 00:11:57.556 "trtype": "TCP", 00:11:57.556 "adrfam": "IPv4", 00:11:57.556 "traddr": "10.0.0.1", 00:11:57.556 "trsvcid": "49426" 00:11:57.556 }, 00:11:57.556 "auth": { 00:11:57.556 "state": "completed", 00:11:57.556 "digest": 
"sha512", 00:11:57.556 "dhgroup": "ffdhe8192" 00:11:57.556 } 00:11:57.556 } 00:11:57.556 ]' 00:11:57.556 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:57.556 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:57.556 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:57.815 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:57.815 13:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:57.815 13:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.815 13:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:57.815 13:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:58.073 13:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY0MDZkOGNlMjc4MWQ4ZjYyNTAyOTNjNWUzNGExOTQ2NjI0OGE3ZjZlNGQxOWM2YULh6g==: --dhchap-ctrl-secret DHHC-1:01:NzY5OTJjM2UzNDQ0NjQ2NzEyYjk2ZGExZTlkZTZkM2NLb/z4: 00:11:58.073 13:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:02:ZDY0MDZkOGNlMjc4MWQ4ZjYyNTAyOTNjNWUzNGExOTQ2NjI0OGE3ZjZlNGQxOWM2YULh6g==: --dhchap-ctrl-secret DHHC-1:01:NzY5OTJjM2UzNDQ0NjQ2NzEyYjk2ZGExZTlkZTZkM2NLb/z4: 00:11:58.641 13:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:58.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:58.641 13:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:11:58.641 13:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.641 13:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.641 13:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.641 13:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:58.641 13:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:58.641 13:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:58.900 13:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:11:58.900 13:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:58.900 13:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:11:58.900 13:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:58.900 13:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:58.900 13:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.900 13:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key3 00:11:58.900 13:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.900 13:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.900 13:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.900 13:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:58.900 13:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:58.900 13:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:59.467 00:11:59.467 13:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:59.467 13:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:59.467 13:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.726 13:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.726 13:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:59.726 13:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.726 13:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.726 13:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.726 13:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:59.726 { 00:11:59.726 "cntlid": 143, 00:11:59.726 "qid": 0, 00:11:59.726 "state": "enabled", 00:11:59.726 "thread": "nvmf_tgt_poll_group_000", 00:11:59.726 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:11:59.726 "listen_address": { 00:11:59.726 "trtype": "TCP", 00:11:59.726 "adrfam": "IPv4", 00:11:59.726 "traddr": "10.0.0.3", 00:11:59.726 "trsvcid": "4420" 00:11:59.726 }, 00:11:59.726 "peer_address": { 00:11:59.726 "trtype": "TCP", 00:11:59.726 "adrfam": "IPv4", 00:11:59.726 "traddr": "10.0.0.1", 00:11:59.726 "trsvcid": "49450" 00:11:59.726 }, 00:11:59.726 "auth": { 00:11:59.726 "state": "completed", 00:11:59.726 
"digest": "sha512", 00:11:59.726 "dhgroup": "ffdhe8192" 00:11:59.726 } 00:11:59.726 } 00:11:59.726 ]' 00:11:59.726 13:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:59.726 13:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:59.726 13:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:59.985 13:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:59.985 13:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:59.985 13:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:59.985 13:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:59.985 13:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:00.244 13:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZkZjI0Njc3NWQ4YTk5NDkyZGZlZDM1NjIzMDBkY2ZmMzQ2ZTQxZmIxYjQ3NzdiZjY3YTFhNjYxMGQ4YjQ0Y2MZpOQ=: 00:12:00.244 13:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:03:M2ZkZjI0Njc3NWQ4YTk5NDkyZGZlZDM1NjIzMDBkY2ZmMzQ2ZTQxZmIxYjQ3NzdiZjY3YTFhNjYxMGQ4YjQ0Y2MZpOQ=: 00:12:00.812 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.812 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:12:00.812 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.812 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.812 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.812 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:12:00.812 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:12:00.812 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:12:00.812 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:00.812 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:00.812 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:01.072 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:12:01.072 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:01.072 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:01.072 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:01.072 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:01.072 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:01.072 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:01.072 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.072 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.072 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.072 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:01.072 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:01.072 13:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:01.641 00:12:01.641 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:01.641 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:01.641 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:01.900 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:01.900 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:01.900 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.900 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.900 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.900 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:01.900 { 00:12:01.900 "cntlid": 145, 00:12:01.900 "qid": 0, 00:12:01.900 "state": "enabled", 00:12:01.900 "thread": "nvmf_tgt_poll_group_000", 00:12:01.900 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:12:01.900 "listen_address": { 00:12:01.900 "trtype": "TCP", 00:12:01.900 "adrfam": "IPv4", 00:12:01.900 "traddr": "10.0.0.3", 00:12:01.900 "trsvcid": "4420" 00:12:01.900 }, 00:12:01.900 "peer_address": { 00:12:01.900 "trtype": "TCP", 00:12:01.900 "adrfam": "IPv4", 00:12:01.900 "traddr": "10.0.0.1", 00:12:01.900 "trsvcid": "39772" 00:12:01.900 }, 00:12:01.900 "auth": { 00:12:01.900 "state": "completed", 00:12:01.900 "digest": "sha512", 00:12:01.900 "dhgroup": "ffdhe8192" 00:12:01.900 } 00:12:01.900 } 00:12:01.900 ]' 00:12:01.900 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:02.159 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:02.159 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:02.159 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:02.159 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:02.159 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:02.159 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:02.159 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:02.419 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2FmN2U1MTUzMjliYzBmZTU3M2ZjNjcwMDhjODg0OTdkMjI3NDkzZGE5NmFiN2EyQFmPpA==: --dhchap-ctrl-secret DHHC-1:03:Njg1NWJiYmZhMjkxMzRjNWM1NDE2OGNlYjc4YjgzMzRiMDVkZmJhMWQ0OGZlMjAyZjNhZDExNWY1YWQzNWE3Ntyf5n4=: 00:12:02.419 13:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:00:N2FmN2U1MTUzMjliYzBmZTU3M2ZjNjcwMDhjODg0OTdkMjI3NDkzZGE5NmFiN2EyQFmPpA==: --dhchap-ctrl-secret DHHC-1:03:Njg1NWJiYmZhMjkxMzRjNWM1NDE2OGNlYjc4YjgzMzRiMDVkZmJhMWQ0OGZlMjAyZjNhZDExNWY1YWQzNWE3Ntyf5n4=: 00:12:02.987 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:02.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:02.987 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:12:02.987 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.988 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.988 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.988 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key1 00:12:02.988 13:51:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.988 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.988 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.988 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:12:02.988 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:02.988 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:12:02.988 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:02.988 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:02.988 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:02.988 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:02.988 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:12:02.988 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:12:02.988 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:12:03.925 request: 00:12:03.925 { 00:12:03.925 "name": "nvme0", 00:12:03.925 "trtype": "tcp", 00:12:03.925 "traddr": "10.0.0.3", 00:12:03.925 "adrfam": "ipv4", 00:12:03.925 "trsvcid": "4420", 00:12:03.925 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:03.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:12:03.925 "prchk_reftag": false, 00:12:03.925 "prchk_guard": false, 00:12:03.925 "hdgst": false, 00:12:03.925 "ddgst": false, 00:12:03.925 "dhchap_key": "key2", 00:12:03.925 "allow_unrecognized_csi": false, 00:12:03.925 "method": "bdev_nvme_attach_controller", 00:12:03.925 "req_id": 1 00:12:03.925 } 00:12:03.925 Got JSON-RPC error response 00:12:03.925 response: 00:12:03.925 { 00:12:03.925 "code": -5, 00:12:03.925 "message": "Input/output error" 00:12:03.925 } 00:12:03.925 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:03.925 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:03.925 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:03.925 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:03.925 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:12:03.925 
13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.925 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.925 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.925 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:03.925 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.925 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.925 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.925 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:03.925 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:03.925 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:03.925 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:03.925 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:03.925 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:03.925 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:03.925 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:03.925 13:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:03.926 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:04.185 request: 00:12:04.185 { 00:12:04.185 "name": "nvme0", 00:12:04.185 "trtype": "tcp", 00:12:04.185 "traddr": "10.0.0.3", 00:12:04.185 "adrfam": "ipv4", 00:12:04.185 "trsvcid": "4420", 00:12:04.185 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:04.185 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:12:04.185 "prchk_reftag": false, 00:12:04.185 "prchk_guard": false, 00:12:04.185 "hdgst": false, 00:12:04.185 "ddgst": false, 00:12:04.185 "dhchap_key": "key1", 00:12:04.185 "dhchap_ctrlr_key": "ckey2", 00:12:04.185 "allow_unrecognized_csi": false, 00:12:04.185 "method": "bdev_nvme_attach_controller", 00:12:04.185 "req_id": 1 00:12:04.185 } 00:12:04.185 Got JSON-RPC error response 00:12:04.185 response: 00:12:04.185 { 
00:12:04.185 "code": -5, 00:12:04.185 "message": "Input/output error" 00:12:04.185 } 00:12:04.185 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:04.185 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:04.185 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:04.185 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:04.185 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:12:04.185 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.185 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.185 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.185 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key1 00:12:04.185 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.185 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.444 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.444 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:04.444 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:04.444 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:04.444 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:04.444 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:04.444 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:04.444 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:04.444 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:04.444 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:04.444 13:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:04.704 
request: 00:12:04.704 { 00:12:04.704 "name": "nvme0", 00:12:04.704 "trtype": "tcp", 00:12:04.704 "traddr": "10.0.0.3", 00:12:04.704 "adrfam": "ipv4", 00:12:04.704 "trsvcid": "4420", 00:12:04.704 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:04.704 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:12:04.704 "prchk_reftag": false, 00:12:04.704 "prchk_guard": false, 00:12:04.704 "hdgst": false, 00:12:04.704 "ddgst": false, 00:12:04.704 "dhchap_key": "key1", 00:12:04.704 "dhchap_ctrlr_key": "ckey1", 00:12:04.704 "allow_unrecognized_csi": false, 00:12:04.704 "method": "bdev_nvme_attach_controller", 00:12:04.704 "req_id": 1 00:12:04.704 } 00:12:04.704 Got JSON-RPC error response 00:12:04.704 response: 00:12:04.704 { 00:12:04.704 "code": -5, 00:12:04.704 "message": "Input/output error" 00:12:04.704 } 00:12:04.704 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:04.704 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:04.704 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:04.704 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:04.704 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:12:04.704 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.704 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.964 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.964 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 67246 00:12:04.964 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67246 ']' 00:12:04.964 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67246 00:12:04.964 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:12:04.964 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:04.964 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67246 00:12:04.964 killing process with pid 67246 00:12:04.964 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:04.964 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:04.964 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67246' 00:12:04.964 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67246 00:12:04.964 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67246 00:12:05.223 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:12:05.223 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:05.223 13:51:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:05.223 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.223 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=70217 00:12:05.223 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 70217 00:12:05.223 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70217 ']' 00:12:05.223 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:12:05.223 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.223 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:05.223 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.223 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:05.223 13:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.160 13:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:06.160 13:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:06.160 13:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:06.160 13:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:06.160 13:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.160 13:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:06.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:06.160 13:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:06.160 13:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 70217 00:12:06.160 13:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70217 ']' 00:12:06.160 13:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.160 13:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:06.160 13:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
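Before the target restart traced above, the run exercised the failure path of DH-HMAC-CHAP several times: bdev_nvme_attach_controller was invoked with a key the target had not been provisioned with (key2, then key1 paired with a mismatched controller key), and the expected result each time was the JSON-RPC error -5, "Input/output error", rather than an attached controller. A minimal stand-alone sketch of that check, kept to the RPCs visible in the trace and stripped of the harness's NOT/valid_exec_arg plumbing (socket path, address, and NQNs are the ones from this run):

# host-side RPC socket used throughout this run
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
hostnqn=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4

# key2 was never registered for this host on the target, so authentication must fail
if "$rpc" -s "$hostsock" bdev_nvme_attach_controller \
      -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
      -b nvme0 --dhchap-key key2; then
    echo "unexpected: controller attached with an unprovisioned key" >&2
    exit 1
fi
# the expected outcome is the JSON-RPC error shown in the trace: code -5, "Input/output error"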
00:12:06.160 13:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:06.160 13:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.727 13:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:06.727 13:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:06.727 13:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:12:06.727 13:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.727 13:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.727 null0 00:12:06.727 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.727 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:06.727 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.J6I 00:12:06.727 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.727 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.727 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.727 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.IVa ]] 00:12:06.727 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.IVa 00:12:06.727 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.727 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.727 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.727 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:06.727 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Abw 00:12:06.727 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.727 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.727 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.727 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.EPI ]] 00:12:06.727 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.EPI 00:12:06.727 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.727 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.727 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.727 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:06.727 13:51:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.k4F 00:12:06.727 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.727 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.727 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.727 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Lsg ]] 00:12:06.727 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Lsg 00:12:06.727 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.727 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.727 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.727 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:06.727 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.rcE 00:12:06.728 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.728 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.728 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.728 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:12:06.728 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:12:06.728 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:06.728 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:06.728 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:06.728 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:06.728 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:06.728 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key3 00:12:06.728 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.728 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.728 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.728 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:06.728 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
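The sequence just traced is the positive path after the restart: each generated key file is loaded into the target's keyring with keyring_file_add_key, the host is re-admitted to the subsystem bound to key3, and only then does the host side attach with the same key name. A condensed sketch of that sequence, limited to the RPCs visible in the trace; the target-side calls are written here as direct rpc.py invocations against the default /var/tmp/spdk.sock, whereas the harness routes them through its rpc_cmd helper, and the host app is assumed to have the matching key names loaded earlier in the script (not shown in this excerpt):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4

# register key3 (the key file generated earlier in this run) in the target's keyring
"$rpc" keyring_file_add_key key3 /tmp/spdk.key-sha512.rcE
# allow the host on the subsystem and bind it to key3 for DH-HMAC-CHAP
"$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key3
# attach from the host with the matching key; with sha512/ffdhe8192 negotiated, the
# resulting qpair reports auth.state == "completed" in nvmf_subsystem_get_qpairs
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3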
00:12:06.728 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:07.665 nvme0n1 00:12:07.665 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:07.665 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:07.665 13:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:07.924 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:07.924 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:07.924 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.924 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.924 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.924 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:07.924 { 00:12:07.924 "cntlid": 1, 00:12:07.924 "qid": 0, 00:12:07.924 "state": "enabled", 00:12:07.924 "thread": "nvmf_tgt_poll_group_000", 00:12:07.924 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:12:07.924 "listen_address": { 00:12:07.924 "trtype": "TCP", 00:12:07.924 "adrfam": "IPv4", 00:12:07.924 "traddr": "10.0.0.3", 00:12:07.924 "trsvcid": "4420" 00:12:07.924 }, 00:12:07.924 "peer_address": { 00:12:07.924 "trtype": "TCP", 00:12:07.924 "adrfam": "IPv4", 00:12:07.924 "traddr": "10.0.0.1", 00:12:07.924 "trsvcid": "39818" 00:12:07.924 }, 00:12:07.924 "auth": { 00:12:07.924 "state": "completed", 00:12:07.924 "digest": "sha512", 00:12:07.924 "dhgroup": "ffdhe8192" 00:12:07.924 } 00:12:07.924 } 00:12:07.924 ]' 00:12:07.924 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:07.924 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:07.924 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:07.924 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:07.924 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:08.183 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:08.183 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:08.183 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:08.443 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:M2ZkZjI0Njc3NWQ4YTk5NDkyZGZlZDM1NjIzMDBkY2ZmMzQ2ZTQxZmIxYjQ3NzdiZjY3YTFhNjYxMGQ4YjQ0Y2MZpOQ=: 00:12:08.443 13:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:03:M2ZkZjI0Njc3NWQ4YTk5NDkyZGZlZDM1NjIzMDBkY2ZmMzQ2ZTQxZmIxYjQ3NzdiZjY3YTFhNjYxMGQ4YjQ0Y2MZpOQ=: 00:12:09.012 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.012 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:12:09.012 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.012 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.012 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.012 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key3 00:12:09.012 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.012 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.012 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.012 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:12:09.012 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:12:09.271 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:12:09.271 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:09.271 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:12:09.271 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:09.271 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:09.271 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:09.271 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:09.271 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:09.271 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:09.271 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:09.530 request: 00:12:09.530 { 00:12:09.530 "name": "nvme0", 00:12:09.530 "trtype": "tcp", 00:12:09.530 "traddr": "10.0.0.3", 00:12:09.530 "adrfam": "ipv4", 00:12:09.530 "trsvcid": "4420", 00:12:09.530 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:09.530 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:12:09.530 "prchk_reftag": false, 00:12:09.530 "prchk_guard": false, 00:12:09.530 "hdgst": false, 00:12:09.530 "ddgst": false, 00:12:09.530 "dhchap_key": "key3", 00:12:09.530 "allow_unrecognized_csi": false, 00:12:09.530 "method": "bdev_nvme_attach_controller", 00:12:09.530 "req_id": 1 00:12:09.530 } 00:12:09.530 Got JSON-RPC error response 00:12:09.530 response: 00:12:09.530 { 00:12:09.530 "code": -5, 00:12:09.530 "message": "Input/output error" 00:12:09.530 } 00:12:09.530 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:09.530 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:09.530 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:09.530 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:09.530 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:12:09.530 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:12:09.530 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:09.530 13:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:10.098 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:12:10.098 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:10.098 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:12:10.098 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:10.098 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:10.098 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:10.098 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:10.098 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:10.098 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:10.098 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:10.098 request: 00:12:10.098 { 00:12:10.098 "name": "nvme0", 00:12:10.098 "trtype": "tcp", 00:12:10.098 "traddr": "10.0.0.3", 00:12:10.098 "adrfam": "ipv4", 00:12:10.098 "trsvcid": "4420", 00:12:10.098 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:10.098 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:12:10.098 "prchk_reftag": false, 00:12:10.098 "prchk_guard": false, 00:12:10.098 "hdgst": false, 00:12:10.098 "ddgst": false, 00:12:10.098 "dhchap_key": "key3", 00:12:10.098 "allow_unrecognized_csi": false, 00:12:10.098 "method": "bdev_nvme_attach_controller", 00:12:10.098 "req_id": 1 00:12:10.098 } 00:12:10.098 Got JSON-RPC error response 00:12:10.098 response: 00:12:10.098 { 00:12:10.098 "code": -5, 00:12:10.098 "message": "Input/output error" 00:12:10.098 } 00:12:10.098 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:10.098 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:10.098 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:10.098 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:10.098 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:12:10.098 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:12:10.098 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:12:10.098 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:10.098 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:10.098 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:10.356 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:12:10.356 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.356 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.356 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.356 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:12:10.356 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.356 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.356 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.356 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:10.356 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:10.356 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:10.356 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:10.356 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:10.356 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:10.356 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:10.356 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:10.356 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:10.356 13:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:10.923 request: 00:12:10.923 { 00:12:10.923 "name": "nvme0", 00:12:10.923 "trtype": "tcp", 00:12:10.923 "traddr": "10.0.0.3", 00:12:10.923 "adrfam": "ipv4", 00:12:10.923 "trsvcid": "4420", 00:12:10.923 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:10.923 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:12:10.923 "prchk_reftag": false, 00:12:10.923 "prchk_guard": false, 00:12:10.923 "hdgst": false, 00:12:10.923 "ddgst": false, 00:12:10.923 "dhchap_key": "key0", 00:12:10.923 "dhchap_ctrlr_key": "key1", 00:12:10.923 "allow_unrecognized_csi": false, 00:12:10.923 "method": "bdev_nvme_attach_controller", 00:12:10.923 "req_id": 1 00:12:10.923 } 00:12:10.923 Got JSON-RPC error response 00:12:10.923 response: 00:12:10.923 { 00:12:10.923 "code": -5, 00:12:10.923 "message": "Input/output error" 00:12:10.923 } 00:12:10.923 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:10.923 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:10.923 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:10.923 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:12:10.923 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:12:10.923 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:12:10.923 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:12:11.181 nvme0n1 00:12:11.181 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:12:11.181 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:12:11.181 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:11.440 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:11.440 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:11.440 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:11.701 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key1 00:12:11.702 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.702 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.702 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.702 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:12:11.702 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:11.702 13:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:12.639 nvme0n1 00:12:12.639 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:12:12.639 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:12:12.639 13:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.897 13:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.897 13:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:12.897 13:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.897 13:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.897 13:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.897 13:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:12:12.897 13:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.897 13:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:12:13.155 13:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:13.155 13:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDY0MDZkOGNlMjc4MWQ4ZjYyNTAyOTNjNWUzNGExOTQ2NjI0OGE3ZjZlNGQxOWM2YULh6g==: --dhchap-ctrl-secret DHHC-1:03:M2ZkZjI0Njc3NWQ4YTk5NDkyZGZlZDM1NjIzMDBkY2ZmMzQ2ZTQxZmIxYjQ3NzdiZjY3YTFhNjYxMGQ4YjQ0Y2MZpOQ=: 00:12:13.155 13:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid cfa2def7-c8af-457f-82a0-b312efdea7f4 -l 0 --dhchap-secret DHHC-1:02:ZDY0MDZkOGNlMjc4MWQ4ZjYyNTAyOTNjNWUzNGExOTQ2NjI0OGE3ZjZlNGQxOWM2YULh6g==: --dhchap-ctrl-secret DHHC-1:03:M2ZkZjI0Njc3NWQ4YTk5NDkyZGZlZDM1NjIzMDBkY2ZmMzQ2ZTQxZmIxYjQ3NzdiZjY3YTFhNjYxMGQ4YjQ0Y2MZpOQ=: 00:12:13.723 13:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:12:13.723 13:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:12:13.723 13:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:12:13.723 13:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:12:13.723 13:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:12:13.723 13:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:12:13.723 13:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:12:13.723 13:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:13.723 13:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:13.982 13:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:12:13.982 13:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:13.982 13:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:12:13.982 13:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:13.982 13:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:13.982 13:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:13.982 13:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:13.982 13:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:12:13.982 13:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:13.982 13:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:14.550 request: 00:12:14.550 { 00:12:14.550 "name": "nvme0", 00:12:14.550 "trtype": "tcp", 00:12:14.550 "traddr": "10.0.0.3", 00:12:14.550 "adrfam": "ipv4", 00:12:14.550 "trsvcid": "4420", 00:12:14.550 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:14.550 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4", 00:12:14.550 "prchk_reftag": false, 00:12:14.550 "prchk_guard": false, 00:12:14.550 "hdgst": false, 00:12:14.550 "ddgst": false, 00:12:14.550 "dhchap_key": "key1", 00:12:14.550 "allow_unrecognized_csi": false, 00:12:14.550 "method": "bdev_nvme_attach_controller", 00:12:14.550 "req_id": 1 00:12:14.550 } 00:12:14.550 Got JSON-RPC error response 00:12:14.550 response: 00:12:14.550 { 00:12:14.550 "code": -5, 00:12:14.550 "message": "Input/output error" 00:12:14.550 } 00:12:14.550 13:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:14.550 13:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:14.550 13:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:14.550 13:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:14.550 13:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:14.550 13:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:14.550 13:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:15.498 nvme0n1 00:12:15.498 
13:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:12:15.498 13:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:12:15.498 13:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:15.498 13:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:15.498 13:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:15.498 13:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:15.757 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:12:15.757 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.757 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.757 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.757 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:12:15.757 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:12:15.757 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:12:16.015 nvme0n1 00:12:16.273 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:12:16.273 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.273 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:12:16.531 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.531 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:16.531 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.531 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:16.531 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.531 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.788 13:51:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.788 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZjQ2ZDlmZjJmZTk0NGE4YmY4YzZmOTMxYzlmMmI0Y2aDcmP9: '' 2s 00:12:16.788 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:12:16.788 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:12:16.788 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZjQ2ZDlmZjJmZTk0NGE4YmY4YzZmOTMxYzlmMmI0Y2aDcmP9: 00:12:16.788 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:12:16.788 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:12:16.789 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:12:16.789 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZjQ2ZDlmZjJmZTk0NGE4YmY4YzZmOTMxYzlmMmI0Y2aDcmP9: ]] 00:12:16.789 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZjQ2ZDlmZjJmZTk0NGE4YmY4YzZmOTMxYzlmMmI0Y2aDcmP9: 00:12:16.789 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:12:16.789 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:12:16.789 13:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:12:18.689 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:12:18.689 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:12:18.689 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:12:18.689 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:12:18.689 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:12:18.689 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:12:18.689 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:12:18.689 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key1 --dhchap-ctrlr-key key2 00:12:18.689 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.689 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.689 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.689 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZDY0MDZkOGNlMjc4MWQ4ZjYyNTAyOTNjNWUzNGExOTQ2NjI0OGE3ZjZlNGQxOWM2YULh6g==: 2s 00:12:18.689 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:12:18.689 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:12:18.689 13:51:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:12:18.689 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZDY0MDZkOGNlMjc4MWQ4ZjYyNTAyOTNjNWUzNGExOTQ2NjI0OGE3ZjZlNGQxOWM2YULh6g==: 00:12:18.689 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:12:18.689 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:12:18.689 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:12:18.689 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZDY0MDZkOGNlMjc4MWQ4ZjYyNTAyOTNjNWUzNGExOTQ2NjI0OGE3ZjZlNGQxOWM2YULh6g==: ]] 00:12:18.689 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZDY0MDZkOGNlMjc4MWQ4ZjYyNTAyOTNjNWUzNGExOTQ2NjI0OGE3ZjZlNGQxOWM2YULh6g==: 00:12:18.689 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:12:18.689 13:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:12:20.593 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:12:20.852 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:12:20.852 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:12:20.852 13:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:12:20.852 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:12:20.852 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:12:20.852 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:12:20.853 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:20.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:20.853 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:20.853 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.853 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.853 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.853 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:20.853 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:20.853 13:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:21.791 nvme0n1 00:12:21.791 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:21.791 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.791 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.791 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.791 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:21.791 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:22.359 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:12:22.359 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:12:22.359 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:22.618 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:22.618 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:12:22.618 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.618 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.618 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.618 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:12:22.618 13:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:12:22.877 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:12:22.877 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:12:22.877 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.136 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:23.136 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:23.136 13:51:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.136 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.136 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.136 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:23.136 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:23.136 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:23.137 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:12:23.137 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:23.137 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:12:23.137 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:23.137 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:23.137 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:23.704 request: 00:12:23.704 { 00:12:23.704 "name": "nvme0", 00:12:23.704 "dhchap_key": "key1", 00:12:23.704 "dhchap_ctrlr_key": "key3", 00:12:23.704 "method": "bdev_nvme_set_keys", 00:12:23.704 "req_id": 1 00:12:23.704 } 00:12:23.704 Got JSON-RPC error response 00:12:23.704 response: 00:12:23.704 { 00:12:23.704 "code": -13, 00:12:23.704 "message": "Permission denied" 00:12:23.704 } 00:12:23.704 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:23.704 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:23.704 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:23.704 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:23.704 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:12:23.705 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.705 13:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:12:23.963 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:12:23.963 13:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:12:24.899 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:12:24.899 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:12:24.899 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:25.158 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:12:25.158 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:25.158 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.158 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.158 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.158 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:25.158 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:25.158 13:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:26.094 nvme0n1 00:12:26.094 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:26.094 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.094 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.094 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.094 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:26.094 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:26.094 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:26.094 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:12:26.094 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:26.094 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:12:26.094 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:26.094 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:26.094 13:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:26.659 request: 00:12:26.659 { 00:12:26.659 "name": "nvme0", 00:12:26.659 "dhchap_key": "key2", 00:12:26.659 "dhchap_ctrlr_key": "key0", 00:12:26.659 "method": "bdev_nvme_set_keys", 00:12:26.659 "req_id": 1 00:12:26.659 } 00:12:26.659 Got JSON-RPC error response 00:12:26.659 response: 00:12:26.659 { 00:12:26.659 "code": -13, 00:12:26.659 "message": "Permission denied" 00:12:26.659 } 00:12:26.659 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:26.659 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:26.659 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:26.659 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:26.659 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:12:26.659 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:26.659 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:12:26.917 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:12:26.917 13:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:12:28.328 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:12:28.328 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:12:28.328 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.328 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:12:28.328 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:12:28.328 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:12:28.328 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 67271 00:12:28.328 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67271 ']' 00:12:28.328 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67271 00:12:28.328 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:12:28.328 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:28.328 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67271 00:12:28.328 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:28.328 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:28.328 killing process with pid 67271 00:12:28.328 13:51:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67271' 00:12:28.328 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67271 00:12:28.328 13:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67271 00:12:28.895 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:12:28.895 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:28.895 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:12:28.895 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:28.895 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:12:28.895 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:28.895 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:28.895 rmmod nvme_tcp 00:12:28.895 rmmod nvme_fabrics 00:12:28.895 rmmod nvme_keyring 00:12:28.895 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:28.895 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:12:28.895 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:12:28.895 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 70217 ']' 00:12:28.895 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 70217 00:12:28.895 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 70217 ']' 00:12:28.895 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 70217 00:12:28.895 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:12:28.895 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:28.895 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70217 00:12:28.895 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:28.895 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:28.895 killing process with pid 70217 00:12:28.895 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70217' 00:12:28.895 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 70217 00:12:28.895 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 70217 00:12:29.153 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:29.153 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:29.153 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:29.153 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:12:29.153 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 
00:12:29.153 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:12:29.153 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:29.153 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:29.153 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:29.153 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:29.153 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:29.153 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:29.153 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:29.153 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:29.153 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:29.153 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:29.153 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:29.153 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:29.412 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:29.412 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:29.412 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:29.412 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:29.412 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:29.412 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.412 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:29.412 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.412 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:12:29.412 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.J6I /tmp/spdk.key-sha256.Abw /tmp/spdk.key-sha384.k4F /tmp/spdk.key-sha512.rcE /tmp/spdk.key-sha512.IVa /tmp/spdk.key-sha384.EPI /tmp/spdk.key-sha256.Lsg '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:12:29.412 00:12:29.412 real 2m59.978s 00:12:29.412 user 7m9.530s 00:12:29.412 sys 0m28.567s 00:12:29.412 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:29.412 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.412 ************************************ 00:12:29.412 END TEST nvmf_auth_target 
00:12:29.412 ************************************ 00:12:29.412 13:51:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:12:29.412 13:51:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:29.412 13:51:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:29.412 13:51:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:29.412 13:51:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:29.412 ************************************ 00:12:29.412 START TEST nvmf_bdevio_no_huge 00:12:29.412 ************************************ 00:12:29.412 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:29.672 * Looking for test storage... 00:12:29.672 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:29.672 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:29.672 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:12:29.672 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:29.672 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:29.672 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:29.672 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:29.672 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:29.672 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:12:29.672 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:12:29.672 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:12:29.672 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:12:29.672 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:12:29.672 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:12:29.672 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:12:29.672 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:29.672 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:12:29.672 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:12:29.672 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:29.672 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:29.672 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:12:29.672 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:12:29.672 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:29.672 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:12:29.672 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:12:29.672 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:12:29.672 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:12:29.672 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:29.672 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:12:29.672 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:12:29.672 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:29.672 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:29.672 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:12:29.672 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:29.672 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:29.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.672 --rc genhtml_branch_coverage=1 00:12:29.672 --rc genhtml_function_coverage=1 00:12:29.672 --rc genhtml_legend=1 00:12:29.672 --rc geninfo_all_blocks=1 00:12:29.672 --rc geninfo_unexecuted_blocks=1 00:12:29.672 00:12:29.672 ' 00:12:29.672 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:29.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.672 --rc genhtml_branch_coverage=1 00:12:29.672 --rc genhtml_function_coverage=1 00:12:29.672 --rc genhtml_legend=1 00:12:29.672 --rc geninfo_all_blocks=1 00:12:29.672 --rc geninfo_unexecuted_blocks=1 00:12:29.672 00:12:29.672 ' 00:12:29.672 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:29.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.672 --rc genhtml_branch_coverage=1 00:12:29.672 --rc genhtml_function_coverage=1 00:12:29.672 --rc genhtml_legend=1 00:12:29.672 --rc geninfo_all_blocks=1 00:12:29.672 --rc geninfo_unexecuted_blocks=1 00:12:29.672 00:12:29.672 ' 00:12:29.672 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:29.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.672 --rc genhtml_branch_coverage=1 00:12:29.672 --rc genhtml_function_coverage=1 00:12:29.672 --rc genhtml_legend=1 00:12:29.672 --rc geninfo_all_blocks=1 00:12:29.672 --rc geninfo_unexecuted_blocks=1 00:12:29.672 00:12:29.672 ' 00:12:29.672 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:29.672 
13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:12:29.672 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:29.672 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:29.672 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:29.672 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=cfa2def7-c8af-457f-82a0-b312efdea7f4 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:29.673 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:29.673 
13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:29.673 Cannot find device "nvmf_init_br" 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:29.673 Cannot find device "nvmf_init_br2" 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:12:29.673 13:51:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:29.673 Cannot find device "nvmf_tgt_br" 00:12:29.673 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:12:29.673 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:29.673 Cannot find device "nvmf_tgt_br2" 00:12:29.673 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:12:29.673 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:29.673 Cannot find device "nvmf_init_br" 00:12:29.673 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:12:29.673 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:29.673 Cannot find device "nvmf_init_br2" 00:12:29.673 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:12:29.673 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:29.673 Cannot find device "nvmf_tgt_br" 00:12:29.673 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:12:29.673 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:29.673 Cannot find device "nvmf_tgt_br2" 00:12:29.673 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:12:29.673 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:29.933 Cannot find device "nvmf_br" 00:12:29.933 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:12:29.933 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:29.933 Cannot find device "nvmf_init_if" 00:12:29.933 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:12:29.933 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:29.933 Cannot find device "nvmf_init_if2" 00:12:29.933 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:12:29.933 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:12:29.933 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:29.933 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:12:29.933 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:29.933 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:29.933 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:12:29.933 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:29.933 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:29.933 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:29.933 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:29.933 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:29.933 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:29.933 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:29.933 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:29.933 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:29.933 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:29.933 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:29.933 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:29.933 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:29.933 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:29.933 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:29.933 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:29.933 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:29.933 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:29.933 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:29.933 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:29.933 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:29.933 13:51:29 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:29.933 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:29.933 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:29.933 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:29.933 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:29.933 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:29.933 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:30.192 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:30.192 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:30.192 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:30.192 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:30.192 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:30.192 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:30.192 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:12:30.192 00:12:30.192 --- 10.0.0.3 ping statistics --- 00:12:30.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.192 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:12:30.192 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:30.192 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:30.192 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:12:30.193 00:12:30.193 --- 10.0.0.4 ping statistics --- 00:12:30.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.193 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:12:30.193 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:30.193 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:30.193 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:12:30.193 00:12:30.193 --- 10.0.0.1 ping statistics --- 00:12:30.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.193 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:12:30.193 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:30.193 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
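The nvmf_veth_init sequence traced above builds the whole test network before any NVMe/TCP traffic flows: a network namespace for the target, two veth pairs per side, one bridge joining the four peer ends, and iptables rules that admit TCP port 4420. A condensed sketch of the same topology, assuming the interface names and 10.0.0.0/24 addresses shown in the trace:

ip netns add nvmf_tgt_ns_spdk                                  # target runs in its own namespace
ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator-side pairs stay in the host
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target-side pairs move into the namespace
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator addresses
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up      # one bridge joins all four peer ends
for peer in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$peer" up
    ip link set "$peer" master nvmf_br
done
# The real helper also tags each rule with an SPDK_NVMF comment so teardown can filter them out.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP toward the host
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3                                             # host to target, as verified in the trace

The "Cannot find device" and "Cannot open network namespace" messages near the start of this block are expected: common.sh first tears down any leftovers from an earlier run, so those delete commands fail harmlessly on a clean host.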
00:12:30.193 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:12:30.193 00:12:30.193 --- 10.0.0.2 ping statistics --- 00:12:30.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.193 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:12:30.193 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:30.193 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:12:30.193 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:30.193 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:30.193 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:30.193 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:30.193 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:30.193 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:30.193 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:30.193 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:30.193 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:30.193 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:30.193 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:30.193 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=70855 00:12:30.193 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:12:30.193 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 70855 00:12:30.193 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 70855 ']' 00:12:30.193 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.193 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:30.193 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.193 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:30.193 13:51:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:30.193 [2024-12-06 13:51:29.459440] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:12:30.193 [2024-12-06 13:51:29.459546] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:12:30.452 [2024-12-06 13:51:29.628078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:30.452 [2024-12-06 13:51:29.712807] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:30.452 [2024-12-06 13:51:29.712868] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:30.452 [2024-12-06 13:51:29.712883] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:30.452 [2024-12-06 13:51:29.712894] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:30.452 [2024-12-06 13:51:29.712903] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:30.452 [2024-12-06 13:51:29.714055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:30.452 [2024-12-06 13:51:29.714165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:30.452 [2024-12-06 13:51:29.714304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:30.452 [2024-12-06 13:51:29.714310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:30.452 [2024-12-06 13:51:29.720766] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:31.387 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:31.388 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:12:31.388 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:31.388 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:31.388 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:31.388 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:31.388 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:31.388 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.388 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:31.388 [2024-12-06 13:51:30.554383] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:31.388 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.388 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:31.388 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.388 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:31.388 Malloc0 00:12:31.388 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.388 13:51:30 
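With the network in place, nvmfappstart launches the target inside that namespace and waits for its RPC socket. A minimal sketch of the equivalent launch (paths relative to the SPDK checkout; the polling loop is illustrative, not the autotest waitforlisten helper itself):

# -m 0x78 is the core mask for cores 3-6, matching the four reactors seen in the trace;
# --no-huge -s 1024 runs the DPDK environment on 1024 MB of ordinary memory instead of hugepages.
ip netns exec nvmf_tgt_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!
# Illustrative wait: poll the default RPC socket until the target answers.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.5
done

Once the socket answers, the configuration RPCs traced around this point are issued against /var/tmp/spdk.sock.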
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:31.388 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.388 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:31.388 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.388 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:31.388 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.388 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:31.388 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.388 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:31.388 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.388 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:31.388 [2024-12-06 13:51:30.595667] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:31.388 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.388 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:12:31.388 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:31.388 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:12:31.388 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:12:31.388 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:31.388 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:31.388 { 00:12:31.388 "params": { 00:12:31.388 "name": "Nvme$subsystem", 00:12:31.388 "trtype": "$TEST_TRANSPORT", 00:12:31.388 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:31.388 "adrfam": "ipv4", 00:12:31.388 "trsvcid": "$NVMF_PORT", 00:12:31.388 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:31.388 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:31.388 "hdgst": ${hdgst:-false}, 00:12:31.388 "ddgst": ${ddgst:-false} 00:12:31.388 }, 00:12:31.388 "method": "bdev_nvme_attach_controller" 00:12:31.388 } 00:12:31.388 EOF 00:12:31.388 )") 00:12:31.388 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:12:31.388 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
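Stripped of the rpc_cmd wrapper and xtrace noise, the configuration traced in this block reduces to five RPCs that stand up the target side end to end. A sketch using rpc.py directly, with flags copied from the trace:

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192          # -u 8192 sets the I/O unit size; -o as passed by the test script
$rpc bdev_malloc_create 64 512 -b Malloc0             # 64 MiB RAM-backed bdev with 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

After the last call the target logs "NVMe/TCP Target Listening on 10.0.0.3 port 4420", which is the address the generated bdevio configuration connects to.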
00:12:31.388 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:12:31.388 13:51:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:31.388 "params": { 00:12:31.388 "name": "Nvme1", 00:12:31.388 "trtype": "tcp", 00:12:31.388 "traddr": "10.0.0.3", 00:12:31.388 "adrfam": "ipv4", 00:12:31.388 "trsvcid": "4420", 00:12:31.388 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:31.388 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:31.388 "hdgst": false, 00:12:31.388 "ddgst": false 00:12:31.388 }, 00:12:31.388 "method": "bdev_nvme_attach_controller" 00:12:31.388 }' 00:12:31.388 [2024-12-06 13:51:30.648264] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:12:31.388 [2024-12-06 13:51:30.648346] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid70891 ] 00:12:31.646 [2024-12-06 13:51:30.794967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:31.646 [2024-12-06 13:51:30.883137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.646 [2024-12-06 13:51:30.883294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:31.646 [2024-12-06 13:51:30.883309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.646 [2024-12-06 13:51:30.897673] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:31.904 I/O targets: 00:12:31.904 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:31.904 00:12:31.904 00:12:31.904 CUnit - A unit testing framework for C - Version 2.1-3 00:12:31.904 http://cunit.sourceforge.net/ 00:12:31.904 00:12:31.904 00:12:31.904 Suite: bdevio tests on: Nvme1n1 00:12:31.904 Test: blockdev write read block ...passed 00:12:31.904 Test: blockdev write zeroes read block ...passed 00:12:31.904 Test: blockdev write zeroes read no split ...passed 00:12:31.904 Test: blockdev write zeroes read split ...passed 00:12:31.904 Test: blockdev write zeroes read split partial ...passed 00:12:31.904 Test: blockdev reset ...[2024-12-06 13:51:31.150774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:12:31.904 [2024-12-06 13:51:31.150896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e45720 (9): Bad file descriptor 00:12:31.904 passed 00:12:31.904 Test: blockdev write read 8 blocks ...[2024-12-06 13:51:31.168030] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:12:31.904 passed 00:12:31.904 Test: blockdev write read size > 128k ...passed 00:12:31.904 Test: blockdev write read invalid size ...passed 00:12:31.904 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:31.904 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:31.904 Test: blockdev write read max offset ...passed 00:12:31.904 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:31.904 Test: blockdev writev readv 8 blocks ...passed 00:12:31.904 Test: blockdev writev readv 30 x 1block ...passed 00:12:31.904 Test: blockdev writev readv block ...passed 00:12:31.904 Test: blockdev writev readv size > 128k ...passed 00:12:31.904 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:31.904 Test: blockdev comparev and writev ...[2024-12-06 13:51:31.176420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:31.904 [2024-12-06 13:51:31.176485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:31.904 [2024-12-06 13:51:31.176503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:31.904 [2024-12-06 13:51:31.176513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:31.904 [2024-12-06 13:51:31.176936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:31.904 [2024-12-06 13:51:31.176957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:31.904 [2024-12-06 13:51:31.176973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:31.904 [2024-12-06 13:51:31.176982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:31.904 [2024-12-06 13:51:31.177316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:31.904 [2024-12-06 13:51:31.177333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:31.904 [2024-12-06 13:51:31.177348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:31.904 [2024-12-06 13:51:31.177358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:31.904 [2024-12-06 13:51:31.177634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:31.904 [2024-12-06 13:51:31.177654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:31.904 [2024-12-06 13:51:31.177670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:31.904 [2024-12-06 13:51:31.177679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:12:31.904 passed 00:12:31.904 Test: blockdev nvme passthru rw ...passed 00:12:31.904 Test: blockdev nvme passthru vendor specific ...[2024-12-06 13:51:31.178499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:31.904 [2024-12-06 13:51:31.178522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:31.904 [2024-12-06 13:51:31.178628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:31.904 [2024-12-06 13:51:31.178643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:31.904 [2024-12-06 13:51:31.178762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:31.904 [2024-12-06 13:51:31.178776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:31.904 [2024-12-06 13:51:31.178891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:31.904 [2024-12-06 13:51:31.178905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:31.904 passed 00:12:31.904 Test: blockdev nvme admin passthru ...passed 00:12:31.904 Test: blockdev copy ...passed 00:12:31.904 00:12:31.904 Run Summary: Type Total Ran Passed Failed Inactive 00:12:31.904 suites 1 1 n/a 0 0 00:12:31.904 tests 23 23 23 0 0 00:12:31.904 asserts 152 152 152 0 n/a 00:12:31.904 00:12:31.904 Elapsed time = 0.167 seconds 00:12:32.162 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:32.162 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.162 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:32.162 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.162 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:32.162 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:12:32.162 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:32.162 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:12:32.467 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:32.467 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:12:32.467 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:32.467 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:32.467 rmmod nvme_tcp 00:12:32.467 rmmod nvme_fabrics 00:12:32.467 rmmod nvme_keyring 00:12:32.467 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:32.467 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:12:32.467 13:51:31 
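The bdevio run summarized above never touches a configuration file: gen_nvmf_target_json prints the bdev_nvme_attach_controller JSON shown earlier, and bdevio reads it through a process-substitution file descriptor, which is why the command line shows --json /dev/fd/62. A sketch of that invocation pattern (paths relative to the SPDK checkout):

# gen_nvmf_target_json (from test/nvmf/common.sh) writes the JSON to stdout;
# <(...) hands it to bdevio as a /dev/fd/NN path without creating a file on disk.
./test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024

The same --no-huge -s 1024 pair appears here as on the target, so both ends of this test run without hugepages.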
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:12:32.467 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 70855 ']' 00:12:32.467 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 70855 00:12:32.467 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 70855 ']' 00:12:32.467 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 70855 00:12:32.467 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:12:32.467 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:32.467 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70855 00:12:32.467 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:12:32.467 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:12:32.467 killing process with pid 70855 00:12:32.467 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70855' 00:12:32.467 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 70855 00:12:32.467 13:51:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 70855 00:12:32.724 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:32.724 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:32.724 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:32.724 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:12:32.724 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:12:32.724 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:12:32.724 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:32.724 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:32.724 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:32.982 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:32.982 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:32.982 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:32.982 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:32.982 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:32.982 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:32.982 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set 
nvmf_tgt_br down 00:12:32.982 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:32.982 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:32.982 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:32.982 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:32.982 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:32.982 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:32.982 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:32.982 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.982 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:32.982 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.982 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:12:32.982 00:12:32.982 real 0m3.634s 00:12:32.982 user 0m10.997s 00:12:32.982 sys 0m1.505s 00:12:32.982 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:32.982 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:32.982 ************************************ 00:12:32.982 END TEST nvmf_bdevio_no_huge 00:12:32.982 ************************************ 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:33.242 ************************************ 00:12:33.242 START TEST nvmf_tls 00:12:33.242 ************************************ 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:33.242 * Looking for test storage... 
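Teardown for the bdevio case mirrors the setup: the target process is killed, the nvme-tcp modules are unloaded, the SPDK-tagged iptables rules are filtered back out, and nvmf_veth_fini removes the interfaces, bridge, and namespace. A condensed sketch of those steps as they appear in the trace:

kill "$nvmfpid" && wait "$nvmfpid"                      # killprocess 70855 in the trace
modprobe -v -r nvme-tcp                                 # also drops nvme_fabrics and nvme_keyring, as logged
iptables-save | grep -v SPDK_NVMF | iptables-restore    # iptr: strip only the rules tagged SPDK_NVMF
ip link delete nvmf_br type bridge                      # nvmf_veth_fini: unwind the topology
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk                        # remove_spdk_ns (namespace removal)

The nvmf_tls case that starts next rebuilds exactly the same namespace and bridge through nvmftestinit before configuring TLS.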
00:12:33.242 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:33.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.242 --rc genhtml_branch_coverage=1 00:12:33.242 --rc genhtml_function_coverage=1 00:12:33.242 --rc genhtml_legend=1 00:12:33.242 --rc geninfo_all_blocks=1 00:12:33.242 --rc geninfo_unexecuted_blocks=1 00:12:33.242 00:12:33.242 ' 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:33.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.242 --rc genhtml_branch_coverage=1 00:12:33.242 --rc genhtml_function_coverage=1 00:12:33.242 --rc genhtml_legend=1 00:12:33.242 --rc geninfo_all_blocks=1 00:12:33.242 --rc geninfo_unexecuted_blocks=1 00:12:33.242 00:12:33.242 ' 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:33.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.242 --rc genhtml_branch_coverage=1 00:12:33.242 --rc genhtml_function_coverage=1 00:12:33.242 --rc genhtml_legend=1 00:12:33.242 --rc geninfo_all_blocks=1 00:12:33.242 --rc geninfo_unexecuted_blocks=1 00:12:33.242 00:12:33.242 ' 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:33.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.242 --rc genhtml_branch_coverage=1 00:12:33.242 --rc genhtml_function_coverage=1 00:12:33.242 --rc genhtml_legend=1 00:12:33.242 --rc geninfo_all_blocks=1 00:12:33.242 --rc geninfo_unexecuted_blocks=1 00:12:33.242 00:12:33.242 ' 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:33.242 13:51:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:33.242 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:33.243 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:33.243 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:12:33.243 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=cfa2def7-c8af-457f-82a0-b312efdea7f4 00:12:33.243 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:33.243 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:33.243 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:33.243 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:33.243 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:33.243 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:12:33.243 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:33.243 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:33.243 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:33.243 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.243 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.243 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.243 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:12:33.243 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.243 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:12:33.243 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:33.243 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:33.243 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:33.243 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:33.243 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:33.243 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:33.243 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:33.243 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:33.243 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:33.243 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:33.243 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:33.243 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:12:33.243 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:33.243 
13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:33.243 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:33.243 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:33.243 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:33.243 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.243 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:33.243 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.501 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:33.501 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:33.501 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:33.501 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:33.501 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:33.501 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:33.501 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:33.501 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:33.501 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:33.501 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:33.501 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:33.501 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:33.502 Cannot find device "nvmf_init_br" 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:33.502 Cannot find device "nvmf_init_br2" 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:33.502 Cannot find device "nvmf_tgt_br" 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:33.502 Cannot find device "nvmf_tgt_br2" 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:33.502 Cannot find device "nvmf_init_br" 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:33.502 Cannot find device "nvmf_init_br2" 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:33.502 Cannot find device "nvmf_tgt_br" 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:33.502 Cannot find device "nvmf_tgt_br2" 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:33.502 Cannot find device "nvmf_br" 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:33.502 Cannot find device "nvmf_init_if" 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:33.502 Cannot find device "nvmf_init_if2" 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:33.502 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:33.502 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:33.502 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:33.760 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:33.760 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:33.760 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:33.760 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:33.760 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:33.760 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:33.760 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:33.760 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:33.760 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:33.760 13:51:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:33.760 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:33.760 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:33.760 13:51:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:33.760 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:33.760 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.097 ms 00:12:33.760 00:12:33.760 --- 10.0.0.3 ping statistics --- 00:12:33.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.760 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:12:33.760 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:33.760 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:33.760 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:12:33.760 00:12:33.760 --- 10.0.0.4 ping statistics --- 00:12:33.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.760 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:12:33.760 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:33.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:33.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:12:33.760 00:12:33.760 --- 10.0.0.1 ping statistics --- 00:12:33.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.760 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:12:33.760 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:33.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:33.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:12:33.760 00:12:33.760 --- 10.0.0.2 ping statistics --- 00:12:33.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.760 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:12:33.760 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:33.760 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:12:33.760 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:33.760 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:33.760 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:33.760 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:33.760 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:33.760 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:33.760 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:33.760 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:12:33.760 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:33.760 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:33.760 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:33.760 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71128 00:12:33.760 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71128 00:12:33.760 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:12:33.760 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71128 ']' 00:12:33.760 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.760 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:33.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.760 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.760 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:33.760 13:51:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:33.760 [2024-12-06 13:51:33.116771] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:12:33.760 [2024-12-06 13:51:33.116866] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:34.018 [2024-12-06 13:51:33.277393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.018 [2024-12-06 13:51:33.349266] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:34.018 [2024-12-06 13:51:33.349330] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:34.018 [2024-12-06 13:51:33.349344] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:34.018 [2024-12-06 13:51:33.349355] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:34.018 [2024-12-06 13:51:33.349364] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:34.018 [2024-12-06 13:51:33.349866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:34.953 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:34.953 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:34.953 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:34.953 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:34.953 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:34.953 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:34.953 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:12:34.953 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:12:35.212 true 00:12:35.212 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:35.212 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:12:35.471 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:12:35.471 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:12:35.471 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:35.730 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:12:35.730 13:51:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:35.988 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:12:35.988 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:12:35.988 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:12:36.246 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:12:36.246 13:51:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:36.505 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:12:36.505 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:12:36.505 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:12:36.505 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:36.763 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:12:36.763 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:12:36.763 13:51:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:12:37.021 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:12:37.021 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:37.280 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:12:37.280 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:12:37.280 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:12:37.538 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:37.538 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:12:37.538 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:12:37.538 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:12:37.538 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:12:37.538 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:12:37.538 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:12:37.538 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:12:37.538 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:12:37.538 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:12:37.538 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:12:37.797 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:37.797 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:12:37.797 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:12:37.797 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:12:37.797 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:12:37.797 13:51:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:12:37.797 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:12:37.797 13:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:12:37.797 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:37.797 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:12:37.797 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.xy9RQ1u4IV 00:12:37.797 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:12:37.797 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.2BEpBxTQwp 00:12:37.797 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:37.797 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:37.797 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.xy9RQ1u4IV 00:12:37.797 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.2BEpBxTQwp 00:12:37.797 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:38.055 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:12:38.315 [2024-12-06 13:51:37.553327] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:38.315 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.xy9RQ1u4IV 00:12:38.315 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.xy9RQ1u4IV 00:12:38.315 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:38.574 [2024-12-06 13:51:37.831611] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:38.574 13:51:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:38.836 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:12:39.100 [2024-12-06 13:51:38.267822] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:39.100 [2024-12-06 13:51:38.268226] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:39.100 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:39.100 malloc0 00:12:39.358 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:39.358 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 
/tmp/tmp.xy9RQ1u4IV 00:12:39.616 13:51:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:12:39.875 13:51:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.xy9RQ1u4IV 00:12:52.077 Initializing NVMe Controllers 00:12:52.078 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:12:52.078 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:52.078 Initialization complete. Launching workers. 00:12:52.078 ======================================================== 00:12:52.078 Latency(us) 00:12:52.078 Device Information : IOPS MiB/s Average min max 00:12:52.078 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11399.38 44.53 5615.50 960.34 7531.81 00:12:52.078 ======================================================== 00:12:52.078 Total : 11399.38 44.53 5615.50 960.34 7531.81 00:12:52.078 00:12:52.078 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xy9RQ1u4IV 00:12:52.078 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:52.078 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:52.078 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:52.078 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.xy9RQ1u4IV 00:12:52.078 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:52.078 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71363 00:12:52.078 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:52.078 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:52.078 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71363 /var/tmp/bdevperf.sock 00:12:52.078 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71363 ']' 00:12:52.078 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:52.078 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:52.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:52.078 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
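Everything needed for the TLS data path is now in place: two interchange-format PSKs were generated and written to mode-0600 temp files (/tmp/tmp.xy9RQ1u4IV and /tmp/tmp.2BEpBxTQwp), and the target in the namespace was given a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1, a TLS-enabled listener on 10.0.0.3:4420, a malloc namespace, a file-based keyring entry named key0, and host nqn.2016-06.io.spdk:host1 authorized with --psk key0. spdk_nvme_perf was then pointed at the listener with -S ssl and --psk-path /tmp/tmp.xy9RQ1u4IV and sustained roughly 11.4k IOPS (about 44.5 MiB/s, 5.6 ms average latency) on the 64-deep randrw run above. A condensed sketch of the target-side RPC sequence, matching the calls in the trace (RPC is just a shorthand shell variable introduced here):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC sock_set_default_impl -i ssl
  $RPC sock_impl_set_options -i ssl --tls-version 13
  $RPC framework_start_init
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k enables the (experimental) TLS listener path
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC keyring_file_add_key key0 /tmp/tmp.xy9RQ1u4IV
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0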
00:12:52.078 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:52.078 13:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:52.078 [2024-12-06 13:51:49.470579] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:12:52.078 [2024-12-06 13:51:49.470684] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71363 ] 00:12:52.078 [2024-12-06 13:51:49.621509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.078 [2024-12-06 13:51:49.690865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:52.078 [2024-12-06 13:51:49.764172] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:52.078 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:52.078 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:52.078 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xy9RQ1u4IV 00:12:52.078 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:12:52.078 [2024-12-06 13:51:50.880440] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:52.078 TLSTESTn1 00:12:52.078 13:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:52.078 Running I/O for 10 seconds... 
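With the target listening, the verification shifts to bdevperf: a second SPDK application on core mask 0x4 with its own RPC socket (/var/tmp/bdevperf.sock), into which the same key file is loaded so a TLS-protected controller can be attached; bdevperf.py perform_tests then drives verify I/O for 10 seconds, producing the periodic IOPS samples and summary that follow. A condensed sketch of the initiator side, using the same commands as the trace (SPDK is a shorthand shell variable for the repo path):

  SPDK=/home/vagrant/spdk_repo/spdk
  $SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xy9RQ1u4IV
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  $SPDK/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests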
00:12:53.950 4608.00 IOPS, 18.00 MiB/s [2024-12-06T13:51:54.286Z] 4709.00 IOPS, 18.39 MiB/s [2024-12-06T13:51:55.218Z] 4793.33 IOPS, 18.72 MiB/s [2024-12-06T13:51:56.169Z] 4821.25 IOPS, 18.83 MiB/s [2024-12-06T13:51:57.105Z] 4825.40 IOPS, 18.85 MiB/s [2024-12-06T13:51:58.478Z] 4838.00 IOPS, 18.90 MiB/s [2024-12-06T13:51:59.413Z] 4845.71 IOPS, 18.93 MiB/s [2024-12-06T13:52:00.347Z] 4852.62 IOPS, 18.96 MiB/s [2024-12-06T13:52:01.359Z] 4863.33 IOPS, 19.00 MiB/s [2024-12-06T13:52:01.359Z] 4865.70 IOPS, 19.01 MiB/s 00:13:01.955 Latency(us) 00:13:01.955 [2024-12-06T13:52:01.359Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:01.955 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:01.955 Verification LBA range: start 0x0 length 0x2000 00:13:01.955 TLSTESTn1 : 10.01 4871.19 19.03 0.00 0.00 26230.88 5332.25 20256.58 00:13:01.955 [2024-12-06T13:52:01.359Z] =================================================================================================================== 00:13:01.955 [2024-12-06T13:52:01.359Z] Total : 4871.19 19.03 0.00 0.00 26230.88 5332.25 20256.58 00:13:01.955 { 00:13:01.955 "results": [ 00:13:01.955 { 00:13:01.955 "job": "TLSTESTn1", 00:13:01.955 "core_mask": "0x4", 00:13:01.955 "workload": "verify", 00:13:01.955 "status": "finished", 00:13:01.955 "verify_range": { 00:13:01.955 "start": 0, 00:13:01.955 "length": 8192 00:13:01.955 }, 00:13:01.955 "queue_depth": 128, 00:13:01.955 "io_size": 4096, 00:13:01.955 "runtime": 10.014593, 00:13:01.955 "iops": 4871.191470287409, 00:13:01.955 "mibps": 19.028091680810192, 00:13:01.955 "io_failed": 0, 00:13:01.955 "io_timeout": 0, 00:13:01.955 "avg_latency_us": 26230.881000944817, 00:13:01.955 "min_latency_us": 5332.2472727272725, 00:13:01.955 "max_latency_us": 20256.581818181818 00:13:01.955 } 00:13:01.955 ], 00:13:01.955 "core_count": 1 00:13:01.955 } 00:13:01.955 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:01.955 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71363 00:13:01.955 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71363 ']' 00:13:01.955 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71363 00:13:01.955 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:01.955 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:01.955 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71363 00:13:01.955 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:01.955 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:01.955 killing process with pid 71363 00:13:01.955 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71363' 00:13:01.955 Received shutdown signal, test time was about 10.000000 seconds 00:13:01.955 00:13:01.955 Latency(us) 00:13:01.955 [2024-12-06T13:52:01.359Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:01.955 [2024-12-06T13:52:01.359Z] =================================================================================================================== 00:13:01.955 [2024-12-06T13:52:01.359Z] Total : 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 00:13:01.955 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71363 00:13:01.956 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71363 00:13:02.214 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2BEpBxTQwp 00:13:02.214 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:02.214 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2BEpBxTQwp 00:13:02.214 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:13:02.214 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:02.214 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:13:02.214 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:02.214 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2BEpBxTQwp 00:13:02.214 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:02.214 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:02.214 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:02.214 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.2BEpBxTQwp 00:13:02.214 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:02.214 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:02.214 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71497 00:13:02.214 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:02.214 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71497 /var/tmp/bdevperf.sock 00:13:02.214 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71497 ']' 00:13:02.214 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:02.214 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:02.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:02.214 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:02.214 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:02.214 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:02.214 [2024-12-06 13:52:01.449899] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:13:02.214 [2024-12-06 13:52:01.449991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71497 ] 00:13:02.214 [2024-12-06 13:52:01.588063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.472 [2024-12-06 13:52:01.641377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:02.472 [2024-12-06 13:52:01.710484] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:02.472 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:02.472 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:02.472 13:52:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2BEpBxTQwp 00:13:02.730 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:02.989 [2024-12-06 13:52:02.300520] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:02.989 [2024-12-06 13:52:02.305353] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:02.989 [2024-12-06 13:52:02.305992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e86150 (107): Transport endpoint is not connected 00:13:02.989 [2024-12-06 13:52:02.306980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e86150 (9): Bad file descriptor 00:13:02.989 [2024-12-06 13:52:02.307978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:13:02.989 [2024-12-06 13:52:02.308179] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:02.989 [2024-12-06 13:52:02.308194] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:13:02.989 [2024-12-06 13:52:02.308212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
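This is the first expected-failure case: the initiator loads the second key file (/tmp/tmp.2BEpBxTQwp), which was never registered on the target, so the TLS handshake cannot complete, the connection is dropped, and the initiator sees errno 107 (Transport endpoint is not connected) before the attach RPC returns the I/O error dumped next. A sketch of the pattern, with a plain shell "!" standing in for the harness's expected-failure wrapper (illustration only):

  # attach with a PSK the target does not know; this must fail for the test to pass
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2BEpBxTQwp
  ! $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0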
00:13:02.989 request: 00:13:02.989 { 00:13:02.989 "name": "TLSTEST", 00:13:02.989 "trtype": "tcp", 00:13:02.989 "traddr": "10.0.0.3", 00:13:02.989 "adrfam": "ipv4", 00:13:02.989 "trsvcid": "4420", 00:13:02.989 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:02.989 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:02.989 "prchk_reftag": false, 00:13:02.989 "prchk_guard": false, 00:13:02.989 "hdgst": false, 00:13:02.989 "ddgst": false, 00:13:02.989 "psk": "key0", 00:13:02.989 "allow_unrecognized_csi": false, 00:13:02.989 "method": "bdev_nvme_attach_controller", 00:13:02.989 "req_id": 1 00:13:02.989 } 00:13:02.989 Got JSON-RPC error response 00:13:02.989 response: 00:13:02.989 { 00:13:02.989 "code": -5, 00:13:02.989 "message": "Input/output error" 00:13:02.989 } 00:13:02.989 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71497 00:13:02.989 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71497 ']' 00:13:02.989 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71497 00:13:02.989 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:02.989 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:02.989 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71497 00:13:02.989 killing process with pid 71497 00:13:02.990 Received shutdown signal, test time was about 10.000000 seconds 00:13:02.990 00:13:02.990 Latency(us) 00:13:02.990 [2024-12-06T13:52:02.394Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:02.990 [2024-12-06T13:52:02.394Z] =================================================================================================================== 00:13:02.990 [2024-12-06T13:52:02.394Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:02.990 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:02.990 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:02.990 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71497' 00:13:02.990 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71497 00:13:02.990 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71497 00:13:03.248 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:03.249 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:03.249 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:03.249 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:03.249 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:03.249 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.xy9RQ1u4IV 00:13:03.249 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:03.249 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.xy9RQ1u4IV 
00:13:03.249 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:13:03.249 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:03.249 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:13:03.249 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:03.249 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.xy9RQ1u4IV 00:13:03.249 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:03.249 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:03.249 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:13:03.249 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.xy9RQ1u4IV 00:13:03.249 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:03.249 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71524 00:13:03.249 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:03.249 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:03.249 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71524 /var/tmp/bdevperf.sock 00:13:03.249 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71524 ']' 00:13:03.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:03.249 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:03.249 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:03.249 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:03.249 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:03.249 13:52:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:03.508 [2024-12-06 13:52:02.669070] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:13:03.508 [2024-12-06 13:52:02.669184] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71524 ] 00:13:03.508 [2024-12-06 13:52:02.814478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.508 [2024-12-06 13:52:02.869907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:03.766 [2024-12-06 13:52:02.938691] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:04.334 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:04.334 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:04.334 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xy9RQ1u4IV 00:13:04.592 13:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:13:04.852 [2024-12-06 13:52:04.072024] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:04.852 [2024-12-06 13:52:04.082878] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:04.852 [2024-12-06 13:52:04.082918] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:04.852 [2024-12-06 13:52:04.082980] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:04.852 [2024-12-06 13:52:04.083830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x929150 (107): Transport endpoint is not connected 00:13:04.852 [2024-12-06 13:52:04.084823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x929150 (9): Bad file descriptor 00:13:04.852 [2024-12-06 13:52:04.085819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:13:04.852 [2024-12-06 13:52:04.085841] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:04.852 [2024-12-06 13:52:04.085866] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:13:04.852 [2024-12-06 13:52:04.085896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
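The second expected-failure case mismatches the host NQN rather than the key: the valid key file is loaded, but the controller is attached as nqn.2016-06.io.spdk:host2, and the target logs "Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1" before the attach fails with the I/O error dumped next. The identity string shows that the PSK lookup is keyed on the (host NQN, subsystem NQN) pair, so authorizing host1 with --psk key0 says nothing about host2. If host2 were actually meant to connect, it would need its own registration, along the lines of the hedged sketch below (host2 is deliberately not configured in this run):

  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk key0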
00:13:04.852 request: 00:13:04.852 { 00:13:04.852 "name": "TLSTEST", 00:13:04.852 "trtype": "tcp", 00:13:04.852 "traddr": "10.0.0.3", 00:13:04.852 "adrfam": "ipv4", 00:13:04.852 "trsvcid": "4420", 00:13:04.852 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:04.852 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:13:04.852 "prchk_reftag": false, 00:13:04.852 "prchk_guard": false, 00:13:04.852 "hdgst": false, 00:13:04.852 "ddgst": false, 00:13:04.852 "psk": "key0", 00:13:04.852 "allow_unrecognized_csi": false, 00:13:04.852 "method": "bdev_nvme_attach_controller", 00:13:04.852 "req_id": 1 00:13:04.852 } 00:13:04.852 Got JSON-RPC error response 00:13:04.852 response: 00:13:04.852 { 00:13:04.852 "code": -5, 00:13:04.852 "message": "Input/output error" 00:13:04.852 } 00:13:04.852 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71524 00:13:04.852 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71524 ']' 00:13:04.852 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71524 00:13:04.852 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:04.852 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:04.852 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71524 00:13:04.852 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:04.852 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:04.852 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71524' 00:13:04.852 killing process with pid 71524 00:13:04.852 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71524 00:13:04.852 Received shutdown signal, test time was about 10.000000 seconds 00:13:04.852 00:13:04.852 Latency(us) 00:13:04.852 [2024-12-06T13:52:04.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:04.852 [2024-12-06T13:52:04.256Z] =================================================================================================================== 00:13:04.852 [2024-12-06T13:52:04.256Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:04.852 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71524 00:13:05.111 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:05.111 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:05.111 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:05.111 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:05.111 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:05.111 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.xy9RQ1u4IV 00:13:05.111 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:05.111 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.xy9RQ1u4IV 
00:13:05.111 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:13:05.111 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:05.111 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:13:05.111 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:05.111 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.xy9RQ1u4IV 00:13:05.111 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:05.111 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:13:05.111 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:05.111 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.xy9RQ1u4IV 00:13:05.111 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:05.111 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71551 00:13:05.111 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:05.111 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:05.111 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71551 /var/tmp/bdevperf.sock 00:13:05.111 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71551 ']' 00:13:05.111 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:05.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:05.111 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:05.111 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:05.111 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:05.111 13:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:05.111 [2024-12-06 13:52:04.434605] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:13:05.111 [2024-12-06 13:52:04.434718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71551 ] 00:13:05.370 [2024-12-06 13:52:04.571383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.370 [2024-12-06 13:52:04.632165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:05.370 [2024-12-06 13:52:04.701241] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:06.323 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:06.323 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:06.323 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xy9RQ1u4IV 00:13:06.323 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:06.582 [2024-12-06 13:52:05.799191] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:06.582 [2024-12-06 13:52:05.804038] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:06.582 [2024-12-06 13:52:05.804076] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:06.582 [2024-12-06 13:52:05.804169] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:06.582 [2024-12-06 13:52:05.804776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2247150 (107): Transport endpoint is not connected 00:13:06.582 [2024-12-06 13:52:05.805764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2247150 (9): Bad file descriptor 00:13:06.582 [2024-12-06 13:52:05.806762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:13:06.582 [2024-12-06 13:52:05.806934] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:06.582 [2024-12-06 13:52:05.806966] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:13:06.582 [2024-12-06 13:52:05.806984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
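The third expected-failure case flips the subsystem NQN instead (cnode2 with the correct host and key); the identity NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 likewise has no PSK on the target, so the attach fails the same way. All of these negative cases are driven through the harness's NOT wrapper (visible above as "NOT run_bdevperf ..." with valid_exec_arg), which passes only when the wrapped command fails. A minimal stand-in for that helper, as an illustration rather than the actual autotest_common.sh code:

  NOT() {    # succeed only if the wrapped command fails
      if "$@"; then
          return 1
      fi
      return 0
  }
  NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.xy9RQ1u4IV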
00:13:06.582 request: 00:13:06.582 { 00:13:06.582 "name": "TLSTEST", 00:13:06.582 "trtype": "tcp", 00:13:06.582 "traddr": "10.0.0.3", 00:13:06.582 "adrfam": "ipv4", 00:13:06.582 "trsvcid": "4420", 00:13:06.582 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:13:06.582 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:06.582 "prchk_reftag": false, 00:13:06.582 "prchk_guard": false, 00:13:06.582 "hdgst": false, 00:13:06.582 "ddgst": false, 00:13:06.582 "psk": "key0", 00:13:06.582 "allow_unrecognized_csi": false, 00:13:06.582 "method": "bdev_nvme_attach_controller", 00:13:06.582 "req_id": 1 00:13:06.582 } 00:13:06.582 Got JSON-RPC error response 00:13:06.582 response: 00:13:06.582 { 00:13:06.582 "code": -5, 00:13:06.582 "message": "Input/output error" 00:13:06.582 } 00:13:06.582 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71551 00:13:06.582 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71551 ']' 00:13:06.582 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71551 00:13:06.582 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:06.582 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:06.582 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71551 00:13:06.582 killing process with pid 71551 00:13:06.582 Received shutdown signal, test time was about 10.000000 seconds 00:13:06.582 00:13:06.582 Latency(us) 00:13:06.582 [2024-12-06T13:52:05.986Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:06.582 [2024-12-06T13:52:05.986Z] =================================================================================================================== 00:13:06.582 [2024-12-06T13:52:05.986Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:06.582 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:06.582 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:06.582 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71551' 00:13:06.582 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71551 00:13:06.582 13:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71551 00:13:06.840 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:06.840 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:06.840 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:06.840 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:06.841 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:06.841 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:06.841 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:06.841 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:06.841 13:52:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:13:06.841 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:06.841 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:13:06.841 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:06.841 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:06.841 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:06.841 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:06.841 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:06.841 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:13:06.841 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:06.841 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71581 00:13:06.841 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:06.841 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:06.841 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71581 /var/tmp/bdevperf.sock 00:13:06.841 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71581 ']' 00:13:06.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:06.841 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:06.841 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:06.841 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:06.841 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:06.841 13:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:06.841 [2024-12-06 13:52:06.163338] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:13:06.841 [2024-12-06 13:52:06.163443] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71581 ] 00:13:07.099 [2024-12-06 13:52:06.308092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.099 [2024-12-06 13:52:06.360878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:07.099 [2024-12-06 13:52:06.431021] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:08.035 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:08.035 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:08.035 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:13:08.035 [2024-12-06 13:52:07.352083] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:13:08.035 [2024-12-06 13:52:07.352152] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:13:08.035 request: 00:13:08.035 { 00:13:08.035 "name": "key0", 00:13:08.035 "path": "", 00:13:08.035 "method": "keyring_file_add_key", 00:13:08.035 "req_id": 1 00:13:08.035 } 00:13:08.035 Got JSON-RPC error response 00:13:08.035 response: 00:13:08.035 { 00:13:08.035 "code": -1, 00:13:08.035 "message": "Operation not permitted" 00:13:08.035 } 00:13:08.035 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:08.295 [2024-12-06 13:52:07.616284] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:08.295 [2024-12-06 13:52:07.616533] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:13:08.295 request: 00:13:08.295 { 00:13:08.295 "name": "TLSTEST", 00:13:08.295 "trtype": "tcp", 00:13:08.295 "traddr": "10.0.0.3", 00:13:08.295 "adrfam": "ipv4", 00:13:08.295 "trsvcid": "4420", 00:13:08.295 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:08.295 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:08.295 "prchk_reftag": false, 00:13:08.295 "prchk_guard": false, 00:13:08.295 "hdgst": false, 00:13:08.295 "ddgst": false, 00:13:08.295 "psk": "key0", 00:13:08.295 "allow_unrecognized_csi": false, 00:13:08.295 "method": "bdev_nvme_attach_controller", 00:13:08.295 "req_id": 1 00:13:08.295 } 00:13:08.295 Got JSON-RPC error response 00:13:08.295 response: 00:13:08.295 { 00:13:08.295 "code": -126, 00:13:08.295 "message": "Required key not available" 00:13:08.295 } 00:13:08.295 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71581 00:13:08.295 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71581 ']' 00:13:08.295 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71581 00:13:08.295 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:08.295 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:08.295 13:52:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71581 00:13:08.295 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:08.295 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:08.295 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71581' 00:13:08.295 killing process with pid 71581 00:13:08.295 Received shutdown signal, test time was about 10.000000 seconds 00:13:08.295 00:13:08.295 Latency(us) 00:13:08.295 [2024-12-06T13:52:07.699Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:08.295 [2024-12-06T13:52:07.699Z] =================================================================================================================== 00:13:08.295 [2024-12-06T13:52:07.699Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:08.295 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71581 00:13:08.295 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71581 00:13:08.554 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:08.554 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:08.554 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:08.554 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:08.554 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:08.554 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 71128 00:13:08.554 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71128 ']' 00:13:08.554 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71128 00:13:08.554 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:08.554 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:08.554 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71128 00:13:08.554 killing process with pid 71128 00:13:08.554 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:08.554 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:08.554 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71128' 00:13:08.554 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71128 00:13:08.554 13:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71128 00:13:08.813 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:13:08.813 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:13:08.813 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:13:08.813 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:13:08.813 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:13:08.813 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:13:08.813 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:13:09.073 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:09.073 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:13:09.073 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.4sBmVr7jSK 00:13:09.073 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:09.073 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.4sBmVr7jSK 00:13:09.073 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:13:09.073 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:09.073 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:09.073 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:09.073 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71625 00:13:09.073 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71625 00:13:09.073 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:09.073 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71625 ']' 00:13:09.073 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.073 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:09.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.073 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.073 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:09.073 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:09.073 [2024-12-06 13:52:08.328585] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:13:09.073 [2024-12-06 13:52:08.328658] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:09.073 [2024-12-06 13:52:08.470281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.333 [2024-12-06 13:52:08.519659] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:09.333 [2024-12-06 13:52:08.520025] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:09.333 [2024-12-06 13:52:08.520059] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:09.333 [2024-12-06 13:52:08.520067] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:09.333 [2024-12-06 13:52:08.520074] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:09.333 [2024-12-06 13:52:08.520553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:09.333 [2024-12-06 13:52:08.588693] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:09.333 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:09.333 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:09.333 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:09.333 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:09.333 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:09.333 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:09.333 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.4sBmVr7jSK 00:13:09.333 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.4sBmVr7jSK 00:13:09.333 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:09.592 [2024-12-06 13:52:08.907753] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:09.592 13:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:09.851 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:10.110 [2024-12-06 13:52:09.475858] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:10.110 [2024-12-06 13:52:09.476295] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:10.110 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:10.369 malloc0 00:13:10.369 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:10.628 13:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.4sBmVr7jSK 00:13:10.888 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:11.148 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4sBmVr7jSK 00:13:11.148 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
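
For reference, the NVMeTLSkey-1 interchange string built by format_interchange_psk above (key_long) is the configured key with a 4-byte checksum appended and the result base64-encoded; the "02" field is the digest identifier passed as 2. The trace shows the helper's prefix/key/digest variables but not its python body, so the following is only a sketch that should reproduce the printed key_long value, assuming the checksum is a little-endian CRC32 of the key:

python3 - <<'EOF'
import base64, zlib
# key and digest exactly as passed to format_interchange_psk in the trace above
key = b"00112233445566778899aabbccddeeff0011223344556677"
digest = 2  # digest identifier field ("02" in the printed key); assumed to select SHA-384
crc = zlib.crc32(key).to_bytes(4, byteorder="little")  # assumed checksum layout
print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
EOF

If that assumption holds, this prints the same NVMeTLSkey-1:02:MDAx...wWXNJw==: string that the test then writes to /tmp/tmp.4sBmVr7jSK and chmods to 0600 before starting the nvmf target.
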
00:13:11.148 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:11.148 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:11.148 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.4sBmVr7jSK 00:13:11.148 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:11.148 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:11.148 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71673 00:13:11.148 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:11.148 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71673 /var/tmp/bdevperf.sock 00:13:11.148 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71673 ']' 00:13:11.148 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:11.148 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:11.148 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:11.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:11.148 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:11.148 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:11.148 [2024-12-06 13:52:10.394183] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:13:11.148 [2024-12-06 13:52:10.394270] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71673 ] 00:13:11.148 [2024-12-06 13:52:10.543044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.407 [2024-12-06 13:52:10.608289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:11.407 [2024-12-06 13:52:10.680130] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:11.407 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:11.407 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:11.407 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4sBmVr7jSK 00:13:11.666 13:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:11.924 [2024-12-06 13:52:11.181898] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:11.924 TLSTESTn1 00:13:11.924 13:52:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:12.182 Running I/O for 10 seconds... 00:13:14.056 4513.00 IOPS, 17.63 MiB/s [2024-12-06T13:52:14.395Z] 4639.00 IOPS, 18.12 MiB/s [2024-12-06T13:52:15.785Z] 4679.67 IOPS, 18.28 MiB/s [2024-12-06T13:52:16.719Z] 4714.00 IOPS, 18.41 MiB/s [2024-12-06T13:52:17.702Z] 4726.00 IOPS, 18.46 MiB/s [2024-12-06T13:52:18.636Z] 4739.00 IOPS, 18.51 MiB/s [2024-12-06T13:52:19.570Z] 4742.14 IOPS, 18.52 MiB/s [2024-12-06T13:52:20.532Z] 4737.00 IOPS, 18.50 MiB/s [2024-12-06T13:52:21.486Z] 4681.78 IOPS, 18.29 MiB/s [2024-12-06T13:52:21.486Z] 4647.60 IOPS, 18.15 MiB/s 00:13:22.082 Latency(us) 00:13:22.082 [2024-12-06T13:52:21.486Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:22.082 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:22.082 Verification LBA range: start 0x0 length 0x2000 00:13:22.082 TLSTESTn1 : 10.02 4649.58 18.16 0.00 0.00 27473.09 5928.03 25022.84 00:13:22.082 [2024-12-06T13:52:21.486Z] =================================================================================================================== 00:13:22.082 [2024-12-06T13:52:21.486Z] Total : 4649.58 18.16 0.00 0.00 27473.09 5928.03 25022.84 00:13:22.082 { 00:13:22.082 "results": [ 00:13:22.082 { 00:13:22.082 "job": "TLSTESTn1", 00:13:22.082 "core_mask": "0x4", 00:13:22.082 "workload": "verify", 00:13:22.082 "status": "finished", 00:13:22.082 "verify_range": { 00:13:22.082 "start": 0, 00:13:22.082 "length": 8192 00:13:22.082 }, 00:13:22.082 "queue_depth": 128, 00:13:22.082 "io_size": 4096, 00:13:22.082 "runtime": 10.022831, 00:13:22.082 "iops": 4649.584533551449, 00:13:22.082 "mibps": 18.162439584185346, 00:13:22.082 "io_failed": 0, 00:13:22.082 "io_timeout": 0, 00:13:22.082 "avg_latency_us": 27473.08849015454, 00:13:22.082 "min_latency_us": 5928.029090909091, 00:13:22.082 
"max_latency_us": 25022.836363636365 00:13:22.082 } 00:13:22.082 ], 00:13:22.082 "core_count": 1 00:13:22.082 } 00:13:22.082 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:22.082 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71673 00:13:22.082 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71673 ']' 00:13:22.082 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71673 00:13:22.082 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:22.082 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:22.082 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71673 00:13:22.082 killing process with pid 71673 00:13:22.082 Received shutdown signal, test time was about 10.000000 seconds 00:13:22.082 00:13:22.082 Latency(us) 00:13:22.082 [2024-12-06T13:52:21.486Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:22.082 [2024-12-06T13:52:21.486Z] =================================================================================================================== 00:13:22.082 [2024-12-06T13:52:21.486Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:22.082 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:22.082 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:22.082 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71673' 00:13:22.082 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71673 00:13:22.082 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71673 00:13:22.341 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.4sBmVr7jSK 00:13:22.341 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4sBmVr7jSK 00:13:22.341 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:22.341 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4sBmVr7jSK 00:13:22.341 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:13:22.341 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:22.341 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:13:22.341 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:22.341 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4sBmVr7jSK 00:13:22.341 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:22.341 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:22.341 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:22.341 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.4sBmVr7jSK 00:13:22.341 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:22.341 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71801 00:13:22.341 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:22.341 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:22.341 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71801 /var/tmp/bdevperf.sock 00:13:22.341 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71801 ']' 00:13:22.341 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:22.341 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:22.341 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:22.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:22.341 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:22.341 13:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:22.600 [2024-12-06 13:52:21.755790] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:13:22.600 [2024-12-06 13:52:21.755924] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71801 ] 00:13:22.600 [2024-12-06 13:52:21.903356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.600 [2024-12-06 13:52:21.948498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:22.859 [2024-12-06 13:52:22.017650] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:23.424 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:23.425 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:23.425 13:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4sBmVr7jSK 00:13:23.683 [2024-12-06 13:52:22.998869] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.4sBmVr7jSK': 0100666 00:13:23.683 [2024-12-06 13:52:22.998918] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:13:23.683 request: 00:13:23.683 { 00:13:23.683 "name": "key0", 00:13:23.683 "path": "/tmp/tmp.4sBmVr7jSK", 00:13:23.683 "method": "keyring_file_add_key", 00:13:23.683 "req_id": 1 00:13:23.683 } 00:13:23.683 Got JSON-RPC error response 00:13:23.683 response: 00:13:23.683 { 00:13:23.683 "code": -1, 00:13:23.683 "message": "Operation not permitted" 00:13:23.683 } 00:13:23.683 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:23.942 [2024-12-06 13:52:23.259017] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:23.942 [2024-12-06 13:52:23.259311] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:13:23.942 request: 00:13:23.942 { 00:13:23.942 "name": "TLSTEST", 00:13:23.942 "trtype": "tcp", 00:13:23.942 "traddr": "10.0.0.3", 00:13:23.942 "adrfam": "ipv4", 00:13:23.942 "trsvcid": "4420", 00:13:23.942 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:23.942 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:23.942 "prchk_reftag": false, 00:13:23.942 "prchk_guard": false, 00:13:23.942 "hdgst": false, 00:13:23.942 "ddgst": false, 00:13:23.942 "psk": "key0", 00:13:23.942 "allow_unrecognized_csi": false, 00:13:23.942 "method": "bdev_nvme_attach_controller", 00:13:23.942 "req_id": 1 00:13:23.942 } 00:13:23.942 Got JSON-RPC error response 00:13:23.942 response: 00:13:23.942 { 00:13:23.942 "code": -126, 00:13:23.942 "message": "Required key not available" 00:13:23.942 } 00:13:23.942 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71801 00:13:23.942 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71801 ']' 00:13:23.942 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71801 00:13:23.942 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:23.942 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:23.942 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71801 00:13:23.942 killing process with pid 71801 00:13:23.942 Received shutdown signal, test time was about 10.000000 seconds 00:13:23.942 00:13:23.942 Latency(us) 00:13:23.942 [2024-12-06T13:52:23.346Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:23.942 [2024-12-06T13:52:23.346Z] =================================================================================================================== 00:13:23.942 [2024-12-06T13:52:23.346Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:23.942 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:23.942 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:23.942 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71801' 00:13:23.942 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71801 00:13:23.942 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71801 00:13:24.201 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:24.201 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:24.201 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:24.201 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:24.201 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:24.201 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 71625 00:13:24.201 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71625 ']' 00:13:24.202 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71625 00:13:24.202 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:24.202 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:24.202 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71625 00:13:24.202 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:24.202 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:24.202 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71625' 00:13:24.202 killing process with pid 71625 00:13:24.202 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71625 00:13:24.202 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71625 00:13:24.460 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:13:24.460 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:24.460 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:24.460 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:13:24.460 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:24.460 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71839 00:13:24.460 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71839 00:13:24.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.460 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71839 ']' 00:13:24.460 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.460 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:24.460 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.460 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:24.460 13:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:24.460 [2024-12-06 13:52:23.832618] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:13:24.460 [2024-12-06 13:52:23.832833] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:24.719 [2024-12-06 13:52:23.974035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.719 [2024-12-06 13:52:24.027325] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:24.719 [2024-12-06 13:52:24.027652] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:24.719 [2024-12-06 13:52:24.027826] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:24.719 [2024-12-06 13:52:24.027878] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:24.719 [2024-12-06 13:52:24.027971] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
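
The failure recorded just above is exactly what the chmod 0666 step (target/tls.sh@171) is meant to provoke: SPDK's file-based keyring only accepts key files without group/other access, so keyring_file_add_key reports "Invalid permissions for key file ... 0100666" and returns -1 (Operation not permitted), and the subsequent attach cannot load key0. Collected in one place, the host-side RPCs involved (the same invocations as in the trace, with repo-relative paths) behave as follows:

# key file must be owner-only for the keyring to accept it
chmod 0600 /tmp/tmp.4sBmVr7jSK
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4sBmVr7jSK   # accepted

chmod 0666 /tmp/tmp.4sBmVr7jSK
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4sBmVr7jSK
#   -> JSON-RPC error -1 "Operation not permitted" (Invalid permissions for key file: 0100666)
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0
#   -> JSON-RPC error -126 "Required key not available" (Could not load PSK: key0)
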
00:13:24.719 [2024-12-06 13:52:24.028397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:24.719 [2024-12-06 13:52:24.082211] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:24.978 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:24.978 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:24.978 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:24.978 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:24.978 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:24.978 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:24.978 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.4sBmVr7jSK 00:13:24.978 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:24.978 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.4sBmVr7jSK 00:13:24.978 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:13:24.978 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:24.978 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:13:24.978 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:24.978 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.4sBmVr7jSK 00:13:24.978 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.4sBmVr7jSK 00:13:24.978 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:25.237 [2024-12-06 13:52:24.469999] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:25.237 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:25.496 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:25.755 [2024-12-06 13:52:24.982123] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:25.755 [2024-12-06 13:52:24.982344] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:25.755 13:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:26.013 malloc0 00:13:26.013 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:26.272 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.4sBmVr7jSK 00:13:26.272 
[2024-12-06 13:52:25.639012] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.4sBmVr7jSK': 0100666 00:13:26.272 [2024-12-06 13:52:25.639237] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:13:26.272 request: 00:13:26.272 { 00:13:26.272 "name": "key0", 00:13:26.272 "path": "/tmp/tmp.4sBmVr7jSK", 00:13:26.272 "method": "keyring_file_add_key", 00:13:26.272 "req_id": 1 00:13:26.272 } 00:13:26.272 Got JSON-RPC error response 00:13:26.272 response: 00:13:26.272 { 00:13:26.272 "code": -1, 00:13:26.272 "message": "Operation not permitted" 00:13:26.272 } 00:13:26.272 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:26.530 [2024-12-06 13:52:25.859070] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:13:26.530 [2024-12-06 13:52:25.859312] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:13:26.530 request: 00:13:26.530 { 00:13:26.530 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:26.530 "host": "nqn.2016-06.io.spdk:host1", 00:13:26.530 "psk": "key0", 00:13:26.530 "method": "nvmf_subsystem_add_host", 00:13:26.530 "req_id": 1 00:13:26.530 } 00:13:26.530 Got JSON-RPC error response 00:13:26.530 response: 00:13:26.530 { 00:13:26.530 "code": -32603, 00:13:26.531 "message": "Internal error" 00:13:26.531 } 00:13:26.531 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:26.531 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:26.531 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:26.531 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:26.531 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 71839 00:13:26.531 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71839 ']' 00:13:26.531 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71839 00:13:26.531 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:26.531 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:26.531 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71839 00:13:26.531 killing process with pid 71839 00:13:26.531 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:26.531 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:26.531 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71839' 00:13:26.531 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71839 00:13:26.531 13:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71839 00:13:26.790 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.4sBmVr7jSK 00:13:26.790 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:13:26.790 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:26.790 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:26.790 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:26.790 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71902 00:13:26.790 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:26.790 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71902 00:13:26.790 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71902 ']' 00:13:26.790 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.790 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:26.790 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.790 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:26.790 13:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:27.050 [2024-12-06 13:52:26.231860] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:13:27.050 [2024-12-06 13:52:26.232056] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.050 [2024-12-06 13:52:26.372013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.050 [2024-12-06 13:52:26.418711] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:27.050 [2024-12-06 13:52:26.418766] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:27.050 [2024-12-06 13:52:26.418776] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:27.050 [2024-12-06 13:52:26.418784] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:27.050 [2024-12-06 13:52:26.418790] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
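
The block above (target/tls.sh@178, nvmfpid 71839) drives the same permission failure on the target side while the key file is still 0666: keyring_file_add_key is rejected there too, so key0 never exists in the target's keyring, and the later nvmf_subsystem_add_host cannot resolve the PSK name. Stripped of the trace prefixes, the negative sequence against the target's RPC socket is roughly:

scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.4sBmVr7jSK
#   -> error -1 "Operation not permitted" (file still 0666), so key0 is never created
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk key0
#   -> "Key 'key0' does not exist" / "Unable to add host to TCP transport",
#      reported as JSON-RPC error -32603 "Internal error"

After this, target/tls.sh@182 restores the key file to 0600 and a fresh nvmf target (pid 71902) is started for the next positive case.
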
00:13:27.050 [2024-12-06 13:52:26.419177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:27.310 [2024-12-06 13:52:26.488682] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:27.878 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:27.878 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:27.878 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:27.878 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:27.878 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:27.878 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:27.878 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.4sBmVr7jSK 00:13:27.878 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.4sBmVr7jSK 00:13:27.878 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:28.137 [2024-12-06 13:52:27.413931] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:28.137 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:28.396 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:28.655 [2024-12-06 13:52:27.821969] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:28.655 [2024-12-06 13:52:27.822177] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:28.656 13:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:28.915 malloc0 00:13:28.915 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:29.174 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.4sBmVr7jSK 00:13:29.434 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:29.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
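
With the key back at 0600, setup_nvmf_tgt above succeeds against the new target, and the test (target/tls.sh@188-194) then repeats the TLS initiator flow through bdevperf. Condensed from the trace, the host side amounts to the following (repo-relative paths, same arguments as this run):

# bdevperf as an RPC server on its own socket, 128-deep 4KiB verify workload for 10s
build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

# register the PSK file and attach over NVMe/TCP with TLS; --psk names the keyring entry
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4sBmVr7jSK
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0

# kick off the queued workload on the resulting TLSTESTn1 bdev, as in the first pass above
examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

The save_config dumps that follow (tgtconf at target/tls.sh@198 and bdevperfconf at @199) capture the resulting state: the key0 keyring entry, the secure_channel listener on 10.0.0.3:4420 on the target, and the TLSTEST controller attached with psk key0 on the bdevperf side.
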
00:13:29.434 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=71953 00:13:29.434 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:29.434 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:29.434 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 71953 /var/tmp/bdevperf.sock 00:13:29.434 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71953 ']' 00:13:29.434 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:29.434 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:29.435 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:29.435 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:29.435 13:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:29.694 [2024-12-06 13:52:28.881452] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:13:29.694 [2024-12-06 13:52:28.881797] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71953 ] 00:13:29.694 [2024-12-06 13:52:29.030246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.694 [2024-12-06 13:52:29.091040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:29.953 [2024-12-06 13:52:29.164655] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:30.520 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:30.520 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:30.520 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4sBmVr7jSK 00:13:30.779 13:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:30.779 [2024-12-06 13:52:30.153511] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:31.037 TLSTESTn1 00:13:31.037 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:31.296 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:13:31.296 "subsystems": [ 00:13:31.296 { 00:13:31.296 "subsystem": "keyring", 00:13:31.296 "config": [ 00:13:31.296 { 00:13:31.296 "method": "keyring_file_add_key", 00:13:31.296 "params": { 00:13:31.296 "name": "key0", 00:13:31.296 "path": "/tmp/tmp.4sBmVr7jSK" 00:13:31.296 } 00:13:31.296 } 00:13:31.296 ] 00:13:31.296 }, 
00:13:31.296 { 00:13:31.296 "subsystem": "iobuf", 00:13:31.296 "config": [ 00:13:31.296 { 00:13:31.296 "method": "iobuf_set_options", 00:13:31.296 "params": { 00:13:31.296 "small_pool_count": 8192, 00:13:31.296 "large_pool_count": 1024, 00:13:31.296 "small_bufsize": 8192, 00:13:31.296 "large_bufsize": 135168, 00:13:31.296 "enable_numa": false 00:13:31.296 } 00:13:31.296 } 00:13:31.296 ] 00:13:31.296 }, 00:13:31.296 { 00:13:31.296 "subsystem": "sock", 00:13:31.296 "config": [ 00:13:31.296 { 00:13:31.296 "method": "sock_set_default_impl", 00:13:31.296 "params": { 00:13:31.296 "impl_name": "uring" 00:13:31.296 } 00:13:31.296 }, 00:13:31.296 { 00:13:31.296 "method": "sock_impl_set_options", 00:13:31.296 "params": { 00:13:31.296 "impl_name": "ssl", 00:13:31.296 "recv_buf_size": 4096, 00:13:31.296 "send_buf_size": 4096, 00:13:31.296 "enable_recv_pipe": true, 00:13:31.296 "enable_quickack": false, 00:13:31.296 "enable_placement_id": 0, 00:13:31.296 "enable_zerocopy_send_server": true, 00:13:31.296 "enable_zerocopy_send_client": false, 00:13:31.296 "zerocopy_threshold": 0, 00:13:31.296 "tls_version": 0, 00:13:31.296 "enable_ktls": false 00:13:31.296 } 00:13:31.296 }, 00:13:31.296 { 00:13:31.296 "method": "sock_impl_set_options", 00:13:31.296 "params": { 00:13:31.296 "impl_name": "posix", 00:13:31.296 "recv_buf_size": 2097152, 00:13:31.296 "send_buf_size": 2097152, 00:13:31.296 "enable_recv_pipe": true, 00:13:31.296 "enable_quickack": false, 00:13:31.296 "enable_placement_id": 0, 00:13:31.296 "enable_zerocopy_send_server": true, 00:13:31.296 "enable_zerocopy_send_client": false, 00:13:31.296 "zerocopy_threshold": 0, 00:13:31.296 "tls_version": 0, 00:13:31.296 "enable_ktls": false 00:13:31.296 } 00:13:31.296 }, 00:13:31.296 { 00:13:31.296 "method": "sock_impl_set_options", 00:13:31.296 "params": { 00:13:31.296 "impl_name": "uring", 00:13:31.296 "recv_buf_size": 2097152, 00:13:31.296 "send_buf_size": 2097152, 00:13:31.296 "enable_recv_pipe": true, 00:13:31.296 "enable_quickack": false, 00:13:31.296 "enable_placement_id": 0, 00:13:31.296 "enable_zerocopy_send_server": false, 00:13:31.296 "enable_zerocopy_send_client": false, 00:13:31.296 "zerocopy_threshold": 0, 00:13:31.296 "tls_version": 0, 00:13:31.296 "enable_ktls": false 00:13:31.296 } 00:13:31.296 } 00:13:31.296 ] 00:13:31.296 }, 00:13:31.296 { 00:13:31.296 "subsystem": "vmd", 00:13:31.296 "config": [] 00:13:31.296 }, 00:13:31.296 { 00:13:31.296 "subsystem": "accel", 00:13:31.296 "config": [ 00:13:31.296 { 00:13:31.296 "method": "accel_set_options", 00:13:31.296 "params": { 00:13:31.296 "small_cache_size": 128, 00:13:31.296 "large_cache_size": 16, 00:13:31.296 "task_count": 2048, 00:13:31.296 "sequence_count": 2048, 00:13:31.297 "buf_count": 2048 00:13:31.297 } 00:13:31.297 } 00:13:31.297 ] 00:13:31.297 }, 00:13:31.297 { 00:13:31.297 "subsystem": "bdev", 00:13:31.297 "config": [ 00:13:31.297 { 00:13:31.297 "method": "bdev_set_options", 00:13:31.297 "params": { 00:13:31.297 "bdev_io_pool_size": 65535, 00:13:31.297 "bdev_io_cache_size": 256, 00:13:31.297 "bdev_auto_examine": true, 00:13:31.297 "iobuf_small_cache_size": 128, 00:13:31.297 "iobuf_large_cache_size": 16 00:13:31.297 } 00:13:31.297 }, 00:13:31.297 { 00:13:31.297 "method": "bdev_raid_set_options", 00:13:31.297 "params": { 00:13:31.297 "process_window_size_kb": 1024, 00:13:31.297 "process_max_bandwidth_mb_sec": 0 00:13:31.297 } 00:13:31.297 }, 00:13:31.297 { 00:13:31.297 "method": "bdev_iscsi_set_options", 00:13:31.297 "params": { 00:13:31.297 "timeout_sec": 30 00:13:31.297 } 00:13:31.297 
}, 00:13:31.297 { 00:13:31.297 "method": "bdev_nvme_set_options", 00:13:31.297 "params": { 00:13:31.297 "action_on_timeout": "none", 00:13:31.297 "timeout_us": 0, 00:13:31.297 "timeout_admin_us": 0, 00:13:31.297 "keep_alive_timeout_ms": 10000, 00:13:31.297 "arbitration_burst": 0, 00:13:31.297 "low_priority_weight": 0, 00:13:31.297 "medium_priority_weight": 0, 00:13:31.297 "high_priority_weight": 0, 00:13:31.297 "nvme_adminq_poll_period_us": 10000, 00:13:31.297 "nvme_ioq_poll_period_us": 0, 00:13:31.297 "io_queue_requests": 0, 00:13:31.297 "delay_cmd_submit": true, 00:13:31.297 "transport_retry_count": 4, 00:13:31.297 "bdev_retry_count": 3, 00:13:31.297 "transport_ack_timeout": 0, 00:13:31.297 "ctrlr_loss_timeout_sec": 0, 00:13:31.297 "reconnect_delay_sec": 0, 00:13:31.297 "fast_io_fail_timeout_sec": 0, 00:13:31.297 "disable_auto_failback": false, 00:13:31.297 "generate_uuids": false, 00:13:31.297 "transport_tos": 0, 00:13:31.297 "nvme_error_stat": false, 00:13:31.297 "rdma_srq_size": 0, 00:13:31.297 "io_path_stat": false, 00:13:31.297 "allow_accel_sequence": false, 00:13:31.297 "rdma_max_cq_size": 0, 00:13:31.297 "rdma_cm_event_timeout_ms": 0, 00:13:31.297 "dhchap_digests": [ 00:13:31.297 "sha256", 00:13:31.297 "sha384", 00:13:31.297 "sha512" 00:13:31.297 ], 00:13:31.297 "dhchap_dhgroups": [ 00:13:31.297 "null", 00:13:31.297 "ffdhe2048", 00:13:31.297 "ffdhe3072", 00:13:31.297 "ffdhe4096", 00:13:31.297 "ffdhe6144", 00:13:31.297 "ffdhe8192" 00:13:31.297 ] 00:13:31.297 } 00:13:31.297 }, 00:13:31.297 { 00:13:31.297 "method": "bdev_nvme_set_hotplug", 00:13:31.297 "params": { 00:13:31.297 "period_us": 100000, 00:13:31.297 "enable": false 00:13:31.297 } 00:13:31.297 }, 00:13:31.297 { 00:13:31.297 "method": "bdev_malloc_create", 00:13:31.297 "params": { 00:13:31.297 "name": "malloc0", 00:13:31.297 "num_blocks": 8192, 00:13:31.297 "block_size": 4096, 00:13:31.297 "physical_block_size": 4096, 00:13:31.297 "uuid": "92406d34-e57f-40fc-9447-88ecfbe87b11", 00:13:31.297 "optimal_io_boundary": 0, 00:13:31.297 "md_size": 0, 00:13:31.297 "dif_type": 0, 00:13:31.297 "dif_is_head_of_md": false, 00:13:31.297 "dif_pi_format": 0 00:13:31.297 } 00:13:31.297 }, 00:13:31.297 { 00:13:31.297 "method": "bdev_wait_for_examine" 00:13:31.297 } 00:13:31.297 ] 00:13:31.297 }, 00:13:31.297 { 00:13:31.297 "subsystem": "nbd", 00:13:31.297 "config": [] 00:13:31.297 }, 00:13:31.297 { 00:13:31.297 "subsystem": "scheduler", 00:13:31.297 "config": [ 00:13:31.297 { 00:13:31.297 "method": "framework_set_scheduler", 00:13:31.297 "params": { 00:13:31.297 "name": "static" 00:13:31.297 } 00:13:31.297 } 00:13:31.297 ] 00:13:31.297 }, 00:13:31.297 { 00:13:31.297 "subsystem": "nvmf", 00:13:31.297 "config": [ 00:13:31.297 { 00:13:31.297 "method": "nvmf_set_config", 00:13:31.297 "params": { 00:13:31.297 "discovery_filter": "match_any", 00:13:31.297 "admin_cmd_passthru": { 00:13:31.297 "identify_ctrlr": false 00:13:31.297 }, 00:13:31.297 "dhchap_digests": [ 00:13:31.297 "sha256", 00:13:31.297 "sha384", 00:13:31.297 "sha512" 00:13:31.297 ], 00:13:31.297 "dhchap_dhgroups": [ 00:13:31.297 "null", 00:13:31.297 "ffdhe2048", 00:13:31.297 "ffdhe3072", 00:13:31.297 "ffdhe4096", 00:13:31.297 "ffdhe6144", 00:13:31.297 "ffdhe8192" 00:13:31.297 ] 00:13:31.297 } 00:13:31.297 }, 00:13:31.297 { 00:13:31.297 "method": "nvmf_set_max_subsystems", 00:13:31.297 "params": { 00:13:31.297 "max_subsystems": 1024 00:13:31.297 } 00:13:31.297 }, 00:13:31.297 { 00:13:31.297 "method": "nvmf_set_crdt", 00:13:31.297 "params": { 00:13:31.297 "crdt1": 0, 00:13:31.297 
"crdt2": 0, 00:13:31.297 "crdt3": 0 00:13:31.297 } 00:13:31.297 }, 00:13:31.297 { 00:13:31.297 "method": "nvmf_create_transport", 00:13:31.297 "params": { 00:13:31.297 "trtype": "TCP", 00:13:31.297 "max_queue_depth": 128, 00:13:31.297 "max_io_qpairs_per_ctrlr": 127, 00:13:31.297 "in_capsule_data_size": 4096, 00:13:31.297 "max_io_size": 131072, 00:13:31.297 "io_unit_size": 131072, 00:13:31.297 "max_aq_depth": 128, 00:13:31.297 "num_shared_buffers": 511, 00:13:31.297 "buf_cache_size": 4294967295, 00:13:31.297 "dif_insert_or_strip": false, 00:13:31.297 "zcopy": false, 00:13:31.297 "c2h_success": false, 00:13:31.297 "sock_priority": 0, 00:13:31.297 "abort_timeout_sec": 1, 00:13:31.297 "ack_timeout": 0, 00:13:31.297 "data_wr_pool_size": 0 00:13:31.297 } 00:13:31.297 }, 00:13:31.297 { 00:13:31.297 "method": "nvmf_create_subsystem", 00:13:31.297 "params": { 00:13:31.297 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:31.297 "allow_any_host": false, 00:13:31.297 "serial_number": "SPDK00000000000001", 00:13:31.297 "model_number": "SPDK bdev Controller", 00:13:31.297 "max_namespaces": 10, 00:13:31.297 "min_cntlid": 1, 00:13:31.297 "max_cntlid": 65519, 00:13:31.297 "ana_reporting": false 00:13:31.297 } 00:13:31.297 }, 00:13:31.297 { 00:13:31.297 "method": "nvmf_subsystem_add_host", 00:13:31.297 "params": { 00:13:31.297 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:31.297 "host": "nqn.2016-06.io.spdk:host1", 00:13:31.297 "psk": "key0" 00:13:31.297 } 00:13:31.297 }, 00:13:31.297 { 00:13:31.297 "method": "nvmf_subsystem_add_ns", 00:13:31.297 "params": { 00:13:31.297 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:31.297 "namespace": { 00:13:31.297 "nsid": 1, 00:13:31.297 "bdev_name": "malloc0", 00:13:31.297 "nguid": "92406D34E57F40FC944788ECFBE87B11", 00:13:31.297 "uuid": "92406d34-e57f-40fc-9447-88ecfbe87b11", 00:13:31.297 "no_auto_visible": false 00:13:31.297 } 00:13:31.297 } 00:13:31.297 }, 00:13:31.297 { 00:13:31.297 "method": "nvmf_subsystem_add_listener", 00:13:31.297 "params": { 00:13:31.297 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:31.297 "listen_address": { 00:13:31.297 "trtype": "TCP", 00:13:31.297 "adrfam": "IPv4", 00:13:31.297 "traddr": "10.0.0.3", 00:13:31.297 "trsvcid": "4420" 00:13:31.297 }, 00:13:31.297 "secure_channel": true 00:13:31.297 } 00:13:31.297 } 00:13:31.297 ] 00:13:31.297 } 00:13:31.297 ] 00:13:31.297 }' 00:13:31.297 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:13:31.557 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:13:31.557 "subsystems": [ 00:13:31.557 { 00:13:31.557 "subsystem": "keyring", 00:13:31.557 "config": [ 00:13:31.557 { 00:13:31.557 "method": "keyring_file_add_key", 00:13:31.557 "params": { 00:13:31.557 "name": "key0", 00:13:31.557 "path": "/tmp/tmp.4sBmVr7jSK" 00:13:31.557 } 00:13:31.557 } 00:13:31.557 ] 00:13:31.557 }, 00:13:31.557 { 00:13:31.557 "subsystem": "iobuf", 00:13:31.557 "config": [ 00:13:31.557 { 00:13:31.557 "method": "iobuf_set_options", 00:13:31.557 "params": { 00:13:31.557 "small_pool_count": 8192, 00:13:31.557 "large_pool_count": 1024, 00:13:31.557 "small_bufsize": 8192, 00:13:31.557 "large_bufsize": 135168, 00:13:31.557 "enable_numa": false 00:13:31.557 } 00:13:31.557 } 00:13:31.557 ] 00:13:31.557 }, 00:13:31.557 { 00:13:31.557 "subsystem": "sock", 00:13:31.557 "config": [ 00:13:31.557 { 00:13:31.558 "method": "sock_set_default_impl", 00:13:31.558 "params": { 00:13:31.558 "impl_name": "uring" 00:13:31.558 
} 00:13:31.558 }, 00:13:31.558 { 00:13:31.558 "method": "sock_impl_set_options", 00:13:31.558 "params": { 00:13:31.558 "impl_name": "ssl", 00:13:31.558 "recv_buf_size": 4096, 00:13:31.558 "send_buf_size": 4096, 00:13:31.558 "enable_recv_pipe": true, 00:13:31.558 "enable_quickack": false, 00:13:31.558 "enable_placement_id": 0, 00:13:31.558 "enable_zerocopy_send_server": true, 00:13:31.558 "enable_zerocopy_send_client": false, 00:13:31.558 "zerocopy_threshold": 0, 00:13:31.558 "tls_version": 0, 00:13:31.558 "enable_ktls": false 00:13:31.558 } 00:13:31.558 }, 00:13:31.558 { 00:13:31.558 "method": "sock_impl_set_options", 00:13:31.558 "params": { 00:13:31.558 "impl_name": "posix", 00:13:31.558 "recv_buf_size": 2097152, 00:13:31.558 "send_buf_size": 2097152, 00:13:31.558 "enable_recv_pipe": true, 00:13:31.558 "enable_quickack": false, 00:13:31.558 "enable_placement_id": 0, 00:13:31.558 "enable_zerocopy_send_server": true, 00:13:31.558 "enable_zerocopy_send_client": false, 00:13:31.558 "zerocopy_threshold": 0, 00:13:31.558 "tls_version": 0, 00:13:31.558 "enable_ktls": false 00:13:31.558 } 00:13:31.558 }, 00:13:31.558 { 00:13:31.558 "method": "sock_impl_set_options", 00:13:31.558 "params": { 00:13:31.558 "impl_name": "uring", 00:13:31.558 "recv_buf_size": 2097152, 00:13:31.558 "send_buf_size": 2097152, 00:13:31.558 "enable_recv_pipe": true, 00:13:31.558 "enable_quickack": false, 00:13:31.558 "enable_placement_id": 0, 00:13:31.558 "enable_zerocopy_send_server": false, 00:13:31.558 "enable_zerocopy_send_client": false, 00:13:31.558 "zerocopy_threshold": 0, 00:13:31.558 "tls_version": 0, 00:13:31.558 "enable_ktls": false 00:13:31.558 } 00:13:31.558 } 00:13:31.558 ] 00:13:31.558 }, 00:13:31.558 { 00:13:31.558 "subsystem": "vmd", 00:13:31.558 "config": [] 00:13:31.558 }, 00:13:31.558 { 00:13:31.558 "subsystem": "accel", 00:13:31.558 "config": [ 00:13:31.558 { 00:13:31.558 "method": "accel_set_options", 00:13:31.558 "params": { 00:13:31.558 "small_cache_size": 128, 00:13:31.558 "large_cache_size": 16, 00:13:31.558 "task_count": 2048, 00:13:31.558 "sequence_count": 2048, 00:13:31.558 "buf_count": 2048 00:13:31.558 } 00:13:31.558 } 00:13:31.558 ] 00:13:31.558 }, 00:13:31.558 { 00:13:31.558 "subsystem": "bdev", 00:13:31.558 "config": [ 00:13:31.558 { 00:13:31.558 "method": "bdev_set_options", 00:13:31.558 "params": { 00:13:31.558 "bdev_io_pool_size": 65535, 00:13:31.558 "bdev_io_cache_size": 256, 00:13:31.558 "bdev_auto_examine": true, 00:13:31.558 "iobuf_small_cache_size": 128, 00:13:31.558 "iobuf_large_cache_size": 16 00:13:31.558 } 00:13:31.558 }, 00:13:31.558 { 00:13:31.558 "method": "bdev_raid_set_options", 00:13:31.558 "params": { 00:13:31.558 "process_window_size_kb": 1024, 00:13:31.558 "process_max_bandwidth_mb_sec": 0 00:13:31.558 } 00:13:31.558 }, 00:13:31.558 { 00:13:31.558 "method": "bdev_iscsi_set_options", 00:13:31.558 "params": { 00:13:31.558 "timeout_sec": 30 00:13:31.558 } 00:13:31.558 }, 00:13:31.558 { 00:13:31.558 "method": "bdev_nvme_set_options", 00:13:31.558 "params": { 00:13:31.558 "action_on_timeout": "none", 00:13:31.558 "timeout_us": 0, 00:13:31.558 "timeout_admin_us": 0, 00:13:31.558 "keep_alive_timeout_ms": 10000, 00:13:31.558 "arbitration_burst": 0, 00:13:31.558 "low_priority_weight": 0, 00:13:31.558 "medium_priority_weight": 0, 00:13:31.558 "high_priority_weight": 0, 00:13:31.558 "nvme_adminq_poll_period_us": 10000, 00:13:31.558 "nvme_ioq_poll_period_us": 0, 00:13:31.558 "io_queue_requests": 512, 00:13:31.558 "delay_cmd_submit": true, 00:13:31.558 "transport_retry_count": 4, 
00:13:31.558 "bdev_retry_count": 3, 00:13:31.558 "transport_ack_timeout": 0, 00:13:31.558 "ctrlr_loss_timeout_sec": 0, 00:13:31.558 "reconnect_delay_sec": 0, 00:13:31.558 "fast_io_fail_timeout_sec": 0, 00:13:31.558 "disable_auto_failback": false, 00:13:31.558 "generate_uuids": false, 00:13:31.558 "transport_tos": 0, 00:13:31.558 "nvme_error_stat": false, 00:13:31.558 "rdma_srq_size": 0, 00:13:31.558 "io_path_stat": false, 00:13:31.558 "allow_accel_sequence": false, 00:13:31.558 "rdma_max_cq_size": 0, 00:13:31.558 "rdma_cm_event_timeout_ms": 0, 00:13:31.558 "dhchap_digests": [ 00:13:31.558 "sha256", 00:13:31.558 "sha384", 00:13:31.558 "sha512" 00:13:31.558 ], 00:13:31.558 "dhchap_dhgroups": [ 00:13:31.558 "null", 00:13:31.558 "ffdhe2048", 00:13:31.558 "ffdhe3072", 00:13:31.558 "ffdhe4096", 00:13:31.558 "ffdhe6144", 00:13:31.558 "ffdhe8192" 00:13:31.558 ] 00:13:31.558 } 00:13:31.558 }, 00:13:31.558 { 00:13:31.558 "method": "bdev_nvme_attach_controller", 00:13:31.558 "params": { 00:13:31.558 "name": "TLSTEST", 00:13:31.558 "trtype": "TCP", 00:13:31.558 "adrfam": "IPv4", 00:13:31.558 "traddr": "10.0.0.3", 00:13:31.558 "trsvcid": "4420", 00:13:31.558 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:31.558 "prchk_reftag": false, 00:13:31.558 "prchk_guard": false, 00:13:31.558 "ctrlr_loss_timeout_sec": 0, 00:13:31.558 "reconnect_delay_sec": 0, 00:13:31.558 "fast_io_fail_timeout_sec": 0, 00:13:31.558 "psk": "key0", 00:13:31.558 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:31.558 "hdgst": false, 00:13:31.558 "ddgst": false, 00:13:31.558 "multipath": "multipath" 00:13:31.558 } 00:13:31.558 }, 00:13:31.558 { 00:13:31.558 "method": "bdev_nvme_set_hotplug", 00:13:31.558 "params": { 00:13:31.558 "period_us": 100000, 00:13:31.558 "enable": false 00:13:31.558 } 00:13:31.558 }, 00:13:31.558 { 00:13:31.558 "method": "bdev_wait_for_examine" 00:13:31.558 } 00:13:31.558 ] 00:13:31.558 }, 00:13:31.558 { 00:13:31.558 "subsystem": "nbd", 00:13:31.558 "config": [] 00:13:31.558 } 00:13:31.558 ] 00:13:31.558 }' 00:13:31.558 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 71953 00:13:31.558 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71953 ']' 00:13:31.558 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71953 00:13:31.558 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:31.558 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:31.558 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71953 00:13:31.558 killing process with pid 71953 00:13:31.558 Received shutdown signal, test time was about 10.000000 seconds 00:13:31.558 00:13:31.558 Latency(us) 00:13:31.558 [2024-12-06T13:52:30.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:31.558 [2024-12-06T13:52:30.962Z] =================================================================================================================== 00:13:31.558 [2024-12-06T13:52:30.962Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:31.558 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:31.558 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:31.558 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 71953' 00:13:31.558 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71953 00:13:31.558 13:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71953 00:13:31.818 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 71902 00:13:31.818 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71902 ']' 00:13:31.818 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71902 00:13:31.818 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:31.818 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:31.818 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71902 00:13:31.818 killing process with pid 71902 00:13:31.818 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:31.818 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:31.818 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71902' 00:13:31.818 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71902 00:13:31.818 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71902 00:13:32.078 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:13:32.078 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:32.078 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:32.078 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:32.078 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:13:32.078 "subsystems": [ 00:13:32.078 { 00:13:32.078 "subsystem": "keyring", 00:13:32.078 "config": [ 00:13:32.078 { 00:13:32.078 "method": "keyring_file_add_key", 00:13:32.078 "params": { 00:13:32.078 "name": "key0", 00:13:32.078 "path": "/tmp/tmp.4sBmVr7jSK" 00:13:32.078 } 00:13:32.078 } 00:13:32.078 ] 00:13:32.078 }, 00:13:32.078 { 00:13:32.078 "subsystem": "iobuf", 00:13:32.078 "config": [ 00:13:32.078 { 00:13:32.078 "method": "iobuf_set_options", 00:13:32.078 "params": { 00:13:32.078 "small_pool_count": 8192, 00:13:32.078 "large_pool_count": 1024, 00:13:32.078 "small_bufsize": 8192, 00:13:32.078 "large_bufsize": 135168, 00:13:32.078 "enable_numa": false 00:13:32.078 } 00:13:32.078 } 00:13:32.078 ] 00:13:32.078 }, 00:13:32.078 { 00:13:32.078 "subsystem": "sock", 00:13:32.078 "config": [ 00:13:32.078 { 00:13:32.078 "method": "sock_set_default_impl", 00:13:32.078 "params": { 00:13:32.078 "impl_name": "uring" 00:13:32.078 } 00:13:32.078 }, 00:13:32.078 { 00:13:32.078 "method": "sock_impl_set_options", 00:13:32.078 "params": { 00:13:32.078 "impl_name": "ssl", 00:13:32.078 "recv_buf_size": 4096, 00:13:32.078 "send_buf_size": 4096, 00:13:32.078 "enable_recv_pipe": true, 00:13:32.078 "enable_quickack": false, 00:13:32.078 "enable_placement_id": 0, 00:13:32.078 "enable_zerocopy_send_server": true, 00:13:32.078 "enable_zerocopy_send_client": false, 00:13:32.078 "zerocopy_threshold": 0, 00:13:32.078 "tls_version": 0, 00:13:32.078 
"enable_ktls": false 00:13:32.078 } 00:13:32.078 }, 00:13:32.078 { 00:13:32.078 "method": "sock_impl_set_options", 00:13:32.078 "params": { 00:13:32.078 "impl_name": "posix", 00:13:32.078 "recv_buf_size": 2097152, 00:13:32.078 "send_buf_size": 2097152, 00:13:32.078 "enable_recv_pipe": true, 00:13:32.078 "enable_quickack": false, 00:13:32.078 "enable_placement_id": 0, 00:13:32.078 "enable_zerocopy_send_server": true, 00:13:32.078 "enable_zerocopy_send_client": false, 00:13:32.078 "zerocopy_threshold": 0, 00:13:32.078 "tls_version": 0, 00:13:32.078 "enable_ktls": false 00:13:32.078 } 00:13:32.078 }, 00:13:32.078 { 00:13:32.078 "method": "sock_impl_set_options", 00:13:32.078 "params": { 00:13:32.078 "impl_name": "uring", 00:13:32.078 "recv_buf_size": 2097152, 00:13:32.078 "send_buf_size": 2097152, 00:13:32.078 "enable_recv_pipe": true, 00:13:32.078 "enable_quickack": false, 00:13:32.078 "enable_placement_id": 0, 00:13:32.078 "enable_zerocopy_send_server": false, 00:13:32.078 "enable_zerocopy_send_client": false, 00:13:32.078 "zerocopy_threshold": 0, 00:13:32.078 "tls_version": 0, 00:13:32.078 "enable_ktls": false 00:13:32.078 } 00:13:32.078 } 00:13:32.078 ] 00:13:32.078 }, 00:13:32.078 { 00:13:32.078 "subsystem": "vmd", 00:13:32.078 "config": [] 00:13:32.078 }, 00:13:32.078 { 00:13:32.078 "subsystem": "accel", 00:13:32.078 "config": [ 00:13:32.078 { 00:13:32.078 "method": "accel_set_options", 00:13:32.078 "params": { 00:13:32.078 "small_cache_size": 128, 00:13:32.078 "large_cache_size": 16, 00:13:32.078 "task_count": 2048, 00:13:32.078 "sequence_count": 2048, 00:13:32.078 "buf_count": 2048 00:13:32.078 } 00:13:32.078 } 00:13:32.078 ] 00:13:32.078 }, 00:13:32.078 { 00:13:32.078 "subsystem": "bdev", 00:13:32.078 "config": [ 00:13:32.078 { 00:13:32.078 "method": "bdev_set_options", 00:13:32.079 "params": { 00:13:32.079 "bdev_io_pool_size": 65535, 00:13:32.079 "bdev_io_cache_size": 256, 00:13:32.079 "bdev_auto_examine": true, 00:13:32.079 "iobuf_small_cache_size": 128, 00:13:32.079 "iobuf_large_cache_size": 16 00:13:32.079 } 00:13:32.079 }, 00:13:32.079 { 00:13:32.079 "method": "bdev_raid_set_options", 00:13:32.079 "params": { 00:13:32.079 "process_window_size_kb": 1024, 00:13:32.079 "process_max_bandwidth_mb_sec": 0 00:13:32.079 } 00:13:32.079 }, 00:13:32.079 { 00:13:32.079 "method": "bdev_iscsi_set_options", 00:13:32.079 "params": { 00:13:32.079 "timeout_sec": 30 00:13:32.079 } 00:13:32.079 }, 00:13:32.079 { 00:13:32.079 "method": "bdev_nvme_set_options", 00:13:32.079 "params": { 00:13:32.079 "action_on_timeout": "none", 00:13:32.079 "timeout_us": 0, 00:13:32.079 "timeout_admin_us": 0, 00:13:32.079 "keep_alive_timeout_ms": 10000, 00:13:32.079 "arbitration_burst": 0, 00:13:32.079 "low_priority_weight": 0, 00:13:32.079 "medium_priority_weight": 0, 00:13:32.079 "high_priority_weight": 0, 00:13:32.079 "nvme_adminq_poll_period_us": 10000, 00:13:32.079 "nvme_ioq_poll_period_us": 0, 00:13:32.079 "io_queue_requests": 0, 00:13:32.079 "delay_cmd_submit": true, 00:13:32.079 "transport_retry_count": 4, 00:13:32.079 "bdev_retry_count": 3, 00:13:32.079 "transport_ack_timeout": 0, 00:13:32.079 "ctrlr_loss_timeout_sec": 0, 00:13:32.079 "reconnect_delay_sec": 0, 00:13:32.079 "fast_io_fail_timeout_sec": 0, 00:13:32.079 "disable_auto_failback": false, 00:13:32.079 "generate_uuids": false, 00:13:32.079 "transport_tos": 0, 00:13:32.079 "nvme_error_stat": false, 00:13:32.079 "rdma_srq_size": 0, 00:13:32.079 "io_path_stat": false, 00:13:32.079 "allow_accel_sequence": false, 00:13:32.079 "rdma_max_cq_size": 0, 
00:13:32.079 "rdma_cm_event_timeout_ms": 0, 00:13:32.079 "dhchap_digests": [ 00:13:32.079 "sha256", 00:13:32.079 "sha384", 00:13:32.079 "sha512" 00:13:32.079 ], 00:13:32.079 "dhchap_dhgroups": [ 00:13:32.079 "null", 00:13:32.079 "ffdhe2048", 00:13:32.079 "ffdhe3072", 00:13:32.079 "ffdhe4096", 00:13:32.079 "ffdhe6144", 00:13:32.079 "ffdhe8192" 00:13:32.079 ] 00:13:32.079 } 00:13:32.079 }, 00:13:32.079 { 00:13:32.079 "method": "bdev_nvme_set_hotplug", 00:13:32.079 "params": { 00:13:32.079 "period_us": 100000, 00:13:32.079 "enable": false 00:13:32.079 } 00:13:32.079 }, 00:13:32.079 { 00:13:32.079 "method": "bdev_malloc_create", 00:13:32.079 "params": { 00:13:32.079 "name": "malloc0", 00:13:32.079 "num_blocks": 8192, 00:13:32.079 "block_size": 4096, 00:13:32.079 "physical_block_size": 4096, 00:13:32.079 "uuid": "92406d34-e57f-40fc-9447-88ecfbe87b11", 00:13:32.079 "optimal_io_boundary": 0, 00:13:32.079 "md_size": 0, 00:13:32.079 "dif_type": 0, 00:13:32.079 "dif_is_head_of_md": false, 00:13:32.079 "dif_pi_format": 0 00:13:32.079 } 00:13:32.079 }, 00:13:32.079 { 00:13:32.079 "method": "bdev_wait_for_examine" 00:13:32.079 } 00:13:32.079 ] 00:13:32.079 }, 00:13:32.079 { 00:13:32.079 "subsystem": "nbd", 00:13:32.079 "config": [] 00:13:32.079 }, 00:13:32.079 { 00:13:32.079 "subsystem": "scheduler", 00:13:32.079 "config": [ 00:13:32.079 { 00:13:32.079 "method": "framework_set_scheduler", 00:13:32.079 "params": { 00:13:32.079 "name": "static" 00:13:32.079 } 00:13:32.079 } 00:13:32.079 ] 00:13:32.079 }, 00:13:32.079 { 00:13:32.079 "subsystem": "nvmf", 00:13:32.079 "config": [ 00:13:32.079 { 00:13:32.079 "method": "nvmf_set_config", 00:13:32.079 "params": { 00:13:32.079 "discovery_filter": "match_any", 00:13:32.079 "admin_cmd_passthru": { 00:13:32.079 "identify_ctrlr": false 00:13:32.079 }, 00:13:32.079 "dhchap_digests": [ 00:13:32.079 "sha256", 00:13:32.079 "sha384", 00:13:32.079 "sha512" 00:13:32.079 ], 00:13:32.079 "dhchap_dhgroups": [ 00:13:32.079 "null", 00:13:32.079 "ffdhe2048", 00:13:32.079 "ffdhe3072", 00:13:32.079 "ffdhe4096", 00:13:32.079 "ffdhe6144", 00:13:32.079 "ffdhe8192" 00:13:32.079 ] 00:13:32.079 } 00:13:32.079 }, 00:13:32.079 { 00:13:32.079 "method": "nvmf_set_max_subsystems", 00:13:32.079 "params": { 00:13:32.079 "max_subsystems": 1024 00:13:32.079 } 00:13:32.079 }, 00:13:32.079 { 00:13:32.079 "method": "nvmf_set_crdt", 00:13:32.079 "params": { 00:13:32.079 "crdt1": 0, 00:13:32.079 "crdt2": 0, 00:13:32.079 "crdt3": 0 00:13:32.079 } 00:13:32.079 }, 00:13:32.079 { 00:13:32.079 "method": "nvmf_create_transport", 00:13:32.079 "params": { 00:13:32.079 "trtype": "TCP", 00:13:32.079 "max_queue_depth": 128, 00:13:32.079 "max_io_qpairs_per_ctrlr": 127, 00:13:32.079 "in_capsule_data_size": 4096, 00:13:32.079 "max_io_size": 131072, 00:13:32.079 "io_unit_size": 131072, 00:13:32.079 "max_aq_depth": 128, 00:13:32.079 "num_shared_buffers": 511, 00:13:32.079 "buf_cache_size": 4294967295, 00:13:32.079 "dif_insert_or_strip": false, 00:13:32.079 "zcopy": false, 00:13:32.079 "c2h_success": false, 00:13:32.079 "sock_priority": 0, 00:13:32.079 "abort_timeout_sec": 1, 00:13:32.079 "ack_timeout": 0, 00:13:32.079 "data_wr_pool_size": 0 00:13:32.079 } 00:13:32.079 }, 00:13:32.079 { 00:13:32.079 "method": "nvmf_create_subsystem", 00:13:32.079 "params": { 00:13:32.079 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:32.079 "allow_any_host": false, 00:13:32.079 "serial_number": "SPDK00000000000001", 00:13:32.079 "model_number": "SPDK bdev Controller", 00:13:32.079 "max_namespaces": 10, 00:13:32.079 "min_cntlid": 1, 
00:13:32.079 "max_cntlid": 65519, 00:13:32.079 "ana_reporting": false 00:13:32.079 } 00:13:32.079 }, 00:13:32.079 { 00:13:32.079 "method": "nvmf_subsystem_add_host", 00:13:32.079 "params": { 00:13:32.079 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:32.079 "host": "nqn.2016-06.io.spdk:host1", 00:13:32.079 "psk": "key0" 00:13:32.079 } 00:13:32.079 }, 00:13:32.079 { 00:13:32.079 "method": "nvmf_subsystem_add_ns", 00:13:32.079 "params": { 00:13:32.079 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:32.079 "namespace": { 00:13:32.079 "nsid": 1, 00:13:32.079 "bdev_name": "malloc0", 00:13:32.079 "nguid": "92406D34E57F40FC944788ECFBE87B11", 00:13:32.079 "uuid": "92406d34-e57f-40fc-9447-88ecfbe87b11", 00:13:32.079 "no_auto_visible": false 00:13:32.079 } 00:13:32.079 } 00:13:32.079 }, 00:13:32.079 { 00:13:32.079 "method": "nvmf_subsystem_add_listener", 00:13:32.079 "params": { 00:13:32.079 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:32.079 "listen_address": { 00:13:32.079 "trtype": "TCP", 00:13:32.079 "adrfam": "IPv4", 00:13:32.079 "traddr": "10.0.0.3", 00:13:32.079 "trsvcid": "4420" 00:13:32.079 }, 00:13:32.079 "secure_channel": true 00:13:32.079 } 00:13:32.079 } 00:13:32.079 ] 00:13:32.079 } 00:13:32.079 ] 00:13:32.079 }' 00:13:32.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.080 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72003 00:13:32.080 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:13:32.080 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72003 00:13:32.080 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72003 ']' 00:13:32.080 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.080 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:32.080 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.080 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:32.080 13:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:32.080 [2024-12-06 13:52:31.401573] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:13:32.080 [2024-12-06 13:52:31.401640] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:32.339 [2024-12-06 13:52:31.543104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.339 [2024-12-06 13:52:31.582444] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:32.339 [2024-12-06 13:52:31.582496] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:32.339 [2024-12-06 13:52:31.582506] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:32.339 [2024-12-06 13:52:31.582513] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:13:32.339 [2024-12-06 13:52:31.582519] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:32.339 [2024-12-06 13:52:31.582879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.686 [2024-12-06 13:52:31.753804] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:32.687 [2024-12-06 13:52:31.834603] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:32.687 [2024-12-06 13:52:31.866516] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:32.687 [2024-12-06 13:52:31.866711] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:33.255 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:33.255 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:33.255 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:33.255 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:33.255 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:33.255 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:33.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:33.255 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=72035 00:13:33.255 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 72035 /var/tmp/bdevperf.sock 00:13:33.255 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72035 ']' 00:13:33.255 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:13:33.255 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:33.255 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:33.255 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
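For reference, the JSON blob echoed by target/tls.sh@205 above is never written to disk: nvmfappstart hands it to nvmf_tgt as '-c /dev/fd/62', so the target reads its whole startup configuration (keyring, sock, bdev and nvmf subsystems) from a file descriptor. A minimal sketch of the same idea using bash process substitution; the fd number and the trimmed-down JSON are illustrative, and only the key name and PSK path are taken from this log:

  # start an nvmf target whose JSON config is supplied inline through a file descriptor
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x2 -c <(echo '{
    "subsystems": [
      {
        "subsystem": "keyring",
        "config": [
          { "method": "keyring_file_add_key",
            "params": { "name": "key0", "path": "/tmp/tmp.4sBmVr7jSK" } }
        ]
      }
    ]
  }')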
00:13:33.255 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:13:33.255 "subsystems": [ 00:13:33.255 { 00:13:33.255 "subsystem": "keyring", 00:13:33.255 "config": [ 00:13:33.255 { 00:13:33.255 "method": "keyring_file_add_key", 00:13:33.255 "params": { 00:13:33.255 "name": "key0", 00:13:33.255 "path": "/tmp/tmp.4sBmVr7jSK" 00:13:33.255 } 00:13:33.255 } 00:13:33.255 ] 00:13:33.255 }, 00:13:33.255 { 00:13:33.255 "subsystem": "iobuf", 00:13:33.255 "config": [ 00:13:33.255 { 00:13:33.255 "method": "iobuf_set_options", 00:13:33.255 "params": { 00:13:33.255 "small_pool_count": 8192, 00:13:33.255 "large_pool_count": 1024, 00:13:33.255 "small_bufsize": 8192, 00:13:33.255 "large_bufsize": 135168, 00:13:33.255 "enable_numa": false 00:13:33.255 } 00:13:33.255 } 00:13:33.255 ] 00:13:33.255 }, 00:13:33.255 { 00:13:33.255 "subsystem": "sock", 00:13:33.255 "config": [ 00:13:33.255 { 00:13:33.255 "method": "sock_set_default_impl", 00:13:33.255 "params": { 00:13:33.255 "impl_name": "uring" 00:13:33.255 } 00:13:33.255 }, 00:13:33.255 { 00:13:33.255 "method": "sock_impl_set_options", 00:13:33.255 "params": { 00:13:33.255 "impl_name": "ssl", 00:13:33.255 "recv_buf_size": 4096, 00:13:33.255 "send_buf_size": 4096, 00:13:33.255 "enable_recv_pipe": true, 00:13:33.255 "enable_quickack": false, 00:13:33.255 "enable_placement_id": 0, 00:13:33.255 "enable_zerocopy_send_server": true, 00:13:33.255 "enable_zerocopy_send_client": false, 00:13:33.255 "zerocopy_threshold": 0, 00:13:33.255 "tls_version": 0, 00:13:33.255 "enable_ktls": false 00:13:33.255 } 00:13:33.255 }, 00:13:33.255 { 00:13:33.255 "method": "sock_impl_set_options", 00:13:33.255 "params": { 00:13:33.255 "impl_name": "posix", 00:13:33.255 "recv_buf_size": 2097152, 00:13:33.255 "send_buf_size": 2097152, 00:13:33.255 "enable_recv_pipe": true, 00:13:33.255 "enable_quickack": false, 00:13:33.255 "enable_placement_id": 0, 00:13:33.255 "enable_zerocopy_send_server": true, 00:13:33.255 "enable_zerocopy_send_client": false, 00:13:33.255 "zerocopy_threshold": 0, 00:13:33.255 "tls_version": 0, 00:13:33.255 "enable_ktls": false 00:13:33.255 } 00:13:33.255 }, 00:13:33.255 { 00:13:33.255 "method": "sock_impl_set_options", 00:13:33.255 "params": { 00:13:33.255 "impl_name": "uring", 00:13:33.255 "recv_buf_size": 2097152, 00:13:33.255 "send_buf_size": 2097152, 00:13:33.255 "enable_recv_pipe": true, 00:13:33.255 "enable_quickack": false, 00:13:33.255 "enable_placement_id": 0, 00:13:33.255 "enable_zerocopy_send_server": false, 00:13:33.255 "enable_zerocopy_send_client": false, 00:13:33.255 "zerocopy_threshold": 0, 00:13:33.255 "tls_version": 0, 00:13:33.255 "enable_ktls": false 00:13:33.255 } 00:13:33.255 } 00:13:33.255 ] 00:13:33.255 }, 00:13:33.256 { 00:13:33.256 "subsystem": "vmd", 00:13:33.256 "config": [] 00:13:33.256 }, 00:13:33.256 { 00:13:33.256 "subsystem": "accel", 00:13:33.256 "config": [ 00:13:33.256 { 00:13:33.256 "method": "accel_set_options", 00:13:33.256 "params": { 00:13:33.256 "small_cache_size": 128, 00:13:33.256 "large_cache_size": 16, 00:13:33.256 "task_count": 2048, 00:13:33.256 "sequence_count": 2048, 00:13:33.256 "buf_count": 2048 00:13:33.256 } 00:13:33.256 } 00:13:33.256 ] 00:13:33.256 }, 00:13:33.256 { 00:13:33.256 "subsystem": "bdev", 00:13:33.256 "config": [ 00:13:33.256 { 00:13:33.256 "method": "bdev_set_options", 00:13:33.256 "params": { 00:13:33.256 "bdev_io_pool_size": 65535, 00:13:33.256 "bdev_io_cache_size": 256, 00:13:33.256 "bdev_auto_examine": true, 00:13:33.256 "iobuf_small_cache_size": 128, 00:13:33.256 
"iobuf_large_cache_size": 16 00:13:33.256 } 00:13:33.256 }, 00:13:33.256 { 00:13:33.256 "method": "bdev_raid_set_options", 00:13:33.256 "params": { 00:13:33.256 "process_window_size_kb": 1024, 00:13:33.256 "process_max_bandwidth_mb_sec": 0 00:13:33.256 } 00:13:33.256 }, 00:13:33.256 { 00:13:33.256 "method": "bdev_iscsi_set_options", 00:13:33.256 "params": { 00:13:33.256 "timeout_sec": 30 00:13:33.256 } 00:13:33.256 }, 00:13:33.256 { 00:13:33.256 "method": "bdev_nvme_set_options", 00:13:33.256 "params": { 00:13:33.256 "action_on_timeout": "none", 00:13:33.256 "timeout_us": 0, 00:13:33.256 "timeout_admin_us": 0, 00:13:33.256 "keep_alive_timeout_ms": 10000, 00:13:33.256 "arbitration_burst": 0, 00:13:33.256 "low_priority_weight": 0, 00:13:33.256 "medium_priority_weight": 0, 00:13:33.256 "high_priority_weight": 0, 00:13:33.256 "nvme_adminq_poll_period_us": 10000, 00:13:33.256 "nvme_ioq_poll_period_us": 0, 00:13:33.256 "io_queue_requests": 512, 00:13:33.256 "delay_cmd_submit": true, 00:13:33.256 "transport_retry_count": 4, 00:13:33.256 "bdev_retry_count": 3, 00:13:33.256 "transport_ack_timeout": 0, 00:13:33.256 "ctrlr_loss_timeout_sec": 0, 00:13:33.256 "reconnect_delay_sec": 0, 00:13:33.256 "fast_io_fail_timeout_sec": 0, 00:13:33.256 "disable_auto_failback": false, 00:13:33.256 "generate_uuids": false, 00:13:33.256 "transport_tos": 0, 00:13:33.256 "nvme_error_stat": false, 00:13:33.256 "rdma_srq_size": 0, 00:13:33.256 "io_path_stat": false, 00:13:33.256 "allow_accel_sequence": false, 00:13:33.256 "rdma_max_cq_size": 0, 00:13:33.256 "rdma_cm_event_timeout_ms": 0, 00:13:33.256 "dhchap_digests": [ 00:13:33.256 "sha256", 00:13:33.256 "sha384", 00:13:33.256 "sha512" 00:13:33.256 ], 00:13:33.256 "dhchap_dhgroups": [ 00:13:33.256 "null", 00:13:33.256 "ffdhe2048", 00:13:33.256 "ffdhe3072", 00:13:33.256 "ffdhe4096", 00:13:33.256 "ffdhe6144", 00:13:33.256 "ffdhe8192" 00:13:33.256 ] 00:13:33.256 } 00:13:33.256 }, 00:13:33.256 { 00:13:33.256 "method": "bdev_nvme_attach_controller", 00:13:33.256 "params": { 00:13:33.256 "name": "TLSTEST", 00:13:33.256 "trtype": "TCP", 00:13:33.256 "adrfam": "IPv4", 00:13:33.256 "traddr": "10.0.0.3", 00:13:33.256 "trsvcid": "4420", 00:13:33.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:33.256 "prchk_reftag": false, 00:13:33.256 "prchk_guard": false, 00:13:33.256 "ctrlr_loss_timeout_sec": 0, 00:13:33.256 "reconnect_delay_sec": 0, 00:13:33.256 "fast_io_fail_timeout_sec": 0, 00:13:33.256 "psk": "key0", 00:13:33.256 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:33.256 "hdgst": false, 00:13:33.256 "ddgst": false, 00:13:33.256 "multipath": "multipath" 00:13:33.256 } 00:13:33.256 }, 00:13:33.256 { 00:13:33.256 "method": "bdev_nvme_set_hotplug", 00:13:33.256 "params": { 00:13:33.256 "period_us": 100000, 00:13:33.256 "enable": false 00:13:33.256 } 00:13:33.256 }, 00:13:33.256 { 00:13:33.256 "method": "bdev_wait_for_examine" 00:13:33.256 } 00:13:33.256 ] 00:13:33.256 }, 00:13:33.256 { 00:13:33.256 "subsystem": "nbd", 00:13:33.256 "config": [] 00:13:33.256 } 00:13:33.256 ] 00:13:33.256 }' 00:13:33.256 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:33.256 13:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:33.256 [2024-12-06 13:52:32.433901] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:13:33.256 [2024-12-06 13:52:32.434027] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72035 ] 00:13:33.256 [2024-12-06 13:52:32.576839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.256 [2024-12-06 13:52:32.629611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:33.515 [2024-12-06 13:52:32.763271] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:33.515 [2024-12-06 13:52:32.810699] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:34.084 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:34.084 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:34.084 13:52:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:34.343 Running I/O for 10 seconds... 00:13:36.220 4480.00 IOPS, 17.50 MiB/s [2024-12-06T13:52:36.996Z] 4435.50 IOPS, 17.33 MiB/s [2024-12-06T13:52:37.932Z] 4492.33 IOPS, 17.55 MiB/s [2024-12-06T13:52:38.869Z] 4513.25 IOPS, 17.63 MiB/s [2024-12-06T13:52:39.803Z] 4515.60 IOPS, 17.64 MiB/s [2024-12-06T13:52:40.737Z] 4551.67 IOPS, 17.78 MiB/s [2024-12-06T13:52:41.670Z] 4566.57 IOPS, 17.84 MiB/s [2024-12-06T13:52:43.048Z] 4573.38 IOPS, 17.86 MiB/s [2024-12-06T13:52:43.616Z] 4586.11 IOPS, 17.91 MiB/s [2024-12-06T13:52:43.875Z] 4591.70 IOPS, 17.94 MiB/s 00:13:44.471 Latency(us) 00:13:44.471 [2024-12-06T13:52:43.875Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.471 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:44.471 Verification LBA range: start 0x0 length 0x2000 00:13:44.471 TLSTESTn1 : 10.01 4598.07 17.96 0.00 0.00 27792.21 4676.89 26333.56 00:13:44.472 [2024-12-06T13:52:43.876Z] =================================================================================================================== 00:13:44.472 [2024-12-06T13:52:43.876Z] Total : 4598.07 17.96 0.00 0.00 27792.21 4676.89 26333.56 00:13:44.472 { 00:13:44.472 "results": [ 00:13:44.472 { 00:13:44.472 "job": "TLSTESTn1", 00:13:44.472 "core_mask": "0x4", 00:13:44.472 "workload": "verify", 00:13:44.472 "status": "finished", 00:13:44.472 "verify_range": { 00:13:44.472 "start": 0, 00:13:44.472 "length": 8192 00:13:44.472 }, 00:13:44.472 "queue_depth": 128, 00:13:44.472 "io_size": 4096, 00:13:44.472 "runtime": 10.013757, 00:13:44.472 "iops": 4598.074429008014, 00:13:44.472 "mibps": 17.961228238312554, 00:13:44.472 "io_failed": 0, 00:13:44.472 "io_timeout": 0, 00:13:44.472 "avg_latency_us": 27792.214809233854, 00:13:44.472 "min_latency_us": 4676.887272727273, 00:13:44.472 "max_latency_us": 26333.556363636362 00:13:44.472 } 00:13:44.472 ], 00:13:44.472 "core_count": 1 00:13:44.472 } 00:13:44.472 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:44.472 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 72035 00:13:44.472 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72035 ']' 00:13:44.472 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 72035 00:13:44.472 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:44.472 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:44.472 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72035 00:13:44.472 killing process with pid 72035 00:13:44.472 Received shutdown signal, test time was about 10.000000 seconds 00:13:44.472 00:13:44.472 Latency(us) 00:13:44.472 [2024-12-06T13:52:43.876Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.472 [2024-12-06T13:52:43.876Z] =================================================================================================================== 00:13:44.472 [2024-12-06T13:52:43.876Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:44.472 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:44.472 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:44.472 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72035' 00:13:44.472 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72035 00:13:44.472 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72035 00:13:44.731 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 72003 00:13:44.731 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72003 ']' 00:13:44.731 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72003 00:13:44.731 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:44.731 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:44.731 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72003 00:13:44.731 killing process with pid 72003 00:13:44.731 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:44.731 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:44.731 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72003' 00:13:44.731 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72003 00:13:44.731 13:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72003 00:13:44.731 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:13:44.731 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:44.731 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:44.731 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:44.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
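The 10-second TLSTESTn1 verify run reported above (before the two killprocess calls) follows the usual bdevperf RPC flow: bdevperf is started in wait-for-RPC mode (-z) with its own RPC socket and takes the TLS controller definition from the JSON on /dev/fd/63, then bdevperf.py triggers the workload and prints the per-run latency table. A condensed sketch of that flow with the flags copied from the trace; the backgrounding and the config redirection are simplified here:

  # host side: start bdevperf idle, listening on its own RPC socket
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 &
  # once the socket is up, trigger the run and wait up to 20 s for the results
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -t 20 -s /var/tmp/bdevperf.sock perform_tests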
00:13:44.731 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72168 00:13:44.731 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:44.731 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72168 00:13:44.731 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72168 ']' 00:13:44.731 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.731 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:44.731 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.731 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:44.731 13:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:44.991 [2024-12-06 13:52:44.169216] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:13:44.991 [2024-12-06 13:52:44.169303] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:44.991 [2024-12-06 13:52:44.317695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.991 [2024-12-06 13:52:44.384320] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:44.991 [2024-12-06 13:52:44.384737] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:44.991 [2024-12-06 13:52:44.384775] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:44.991 [2024-12-06 13:52:44.384788] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:44.991 [2024-12-06 13:52:44.384797] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
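This target is started with '-i 0 -e 0xFFFF', i.e. shared-memory instance id 0 with every tracepoint group enabled, which is why app_setup_trace prints the hint above. The two ways to get at the trace data, exactly as the NOTICE describes them (the copy destination below is arbitrary):

  # take a snapshot of the nvmf tracepoints from the running app (instance id 0)
  spdk_trace -s nvmf -i 0
  # or keep the shared-memory trace file for offline analysis
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0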
00:13:44.991 [2024-12-06 13:52:44.385363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.250 [2024-12-06 13:52:44.461144] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:45.825 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:45.825 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:45.825 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:45.825 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:45.825 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:45.825 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:45.825 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.4sBmVr7jSK 00:13:45.825 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.4sBmVr7jSK 00:13:45.826 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:46.089 [2024-12-06 13:52:45.344236] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:46.089 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:46.348 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:46.607 [2024-12-06 13:52:45.792341] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:46.607 [2024-12-06 13:52:45.792955] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:46.607 13:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:46.866 malloc0 00:13:46.866 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:47.126 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.4sBmVr7jSK 00:13:47.385 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:47.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
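setup_nvmf_tgt (target/tls.sh@50-@59, traced above) builds the TLS-capable target purely over RPC: create the TCP transport, create the subsystem, add a listener with -k so TLS is offered on it, expose a malloc namespace, register the PSK file in the keyring, and finally authorize the host NQN against that key. Condensed, with /home/vagrant/spdk_repo/spdk/scripts/rpc.py abbreviated to rpc.py:

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 /tmp/tmp.4sBmVr7jSK
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0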
00:13:47.385 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=72229 00:13:47.386 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:47.386 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:13:47.386 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 72229 /var/tmp/bdevperf.sock 00:13:47.386 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72229 ']' 00:13:47.386 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:47.386 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:47.386 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:47.386 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:47.386 13:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:47.645 [2024-12-06 13:52:46.823720] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:13:47.645 [2024-12-06 13:52:46.824459] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72229 ] 00:13:47.645 [2024-12-06 13:52:46.960987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.645 [2024-12-06 13:52:47.000403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.963 [2024-12-06 13:52:47.053026] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:47.963 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:47.963 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:47.963 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4sBmVr7jSK 00:13:47.963 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:48.237 [2024-12-06 13:52:47.579883] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:48.497 nvme0n1 00:13:48.497 13:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:48.497 Running I/O for 1 seconds... 
00:13:49.435 4662.00 IOPS, 18.21 MiB/s 00:13:49.435 Latency(us) 00:13:49.435 [2024-12-06T13:52:48.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:49.435 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:49.435 Verification LBA range: start 0x0 length 0x2000 00:13:49.435 nvme0n1 : 1.01 4720.83 18.44 0.00 0.00 26909.96 4289.63 23950.43 00:13:49.435 [2024-12-06T13:52:48.839Z] =================================================================================================================== 00:13:49.435 [2024-12-06T13:52:48.839Z] Total : 4720.83 18.44 0.00 0.00 26909.96 4289.63 23950.43 00:13:49.435 { 00:13:49.435 "results": [ 00:13:49.435 { 00:13:49.435 "job": "nvme0n1", 00:13:49.435 "core_mask": "0x2", 00:13:49.435 "workload": "verify", 00:13:49.435 "status": "finished", 00:13:49.435 "verify_range": { 00:13:49.435 "start": 0, 00:13:49.435 "length": 8192 00:13:49.435 }, 00:13:49.435 "queue_depth": 128, 00:13:49.435 "io_size": 4096, 00:13:49.435 "runtime": 1.014863, 00:13:49.435 "iops": 4720.8342406807615, 00:13:49.435 "mibps": 18.440758752659224, 00:13:49.435 "io_failed": 0, 00:13:49.435 "io_timeout": 0, 00:13:49.435 "avg_latency_us": 26909.95755222861, 00:13:49.435 "min_latency_us": 4289.629090909091, 00:13:49.435 "max_latency_us": 23950.429090909092 00:13:49.435 } 00:13:49.435 ], 00:13:49.435 "core_count": 1 00:13:49.435 } 00:13:49.435 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 72229 00:13:49.435 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72229 ']' 00:13:49.435 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72229 00:13:49.435 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:49.435 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:49.435 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72229 00:13:49.694 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:49.694 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:49.694 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72229' 00:13:49.694 killing process with pid 72229 00:13:49.694 Received shutdown signal, test time was about 1.000000 seconds 00:13:49.694 00:13:49.694 Latency(us) 00:13:49.694 [2024-12-06T13:52:49.098Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:49.694 [2024-12-06T13:52:49.098Z] =================================================================================================================== 00:13:49.694 [2024-12-06T13:52:49.098Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:49.694 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72229 00:13:49.694 13:52:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72229 00:13:49.694 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 72168 00:13:49.694 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72168 ']' 00:13:49.694 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72168 00:13:49.694 13:52:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:49.694 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:49.694 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72168 00:13:49.694 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:49.694 killing process with pid 72168 00:13:49.694 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:49.694 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72168' 00:13:49.694 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72168 00:13:49.695 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72168 00:13:49.954 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:13:49.954 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:49.954 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:49.954 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:49.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.954 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72268 00:13:49.954 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:49.954 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72268 00:13:49.954 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72268 ']' 00:13:49.954 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.954 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:49.954 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.954 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:49.954 13:52:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:50.214 [2024-12-06 13:52:49.409480] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:13:50.214 [2024-12-06 13:52:49.409753] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:50.214 [2024-12-06 13:52:49.553820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.473 [2024-12-06 13:52:49.622052] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:50.473 [2024-12-06 13:52:49.622428] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:50.473 [2024-12-06 13:52:49.622447] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:50.473 [2024-12-06 13:52:49.622457] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:50.473 [2024-12-06 13:52:49.622464] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:50.473 [2024-12-06 13:52:49.622904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.473 [2024-12-06 13:52:49.692999] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:51.042 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:51.042 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:51.042 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:51.042 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:51.042 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:51.042 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:51.042 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:13:51.042 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.042 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:51.042 [2024-12-06 13:52:50.368153] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:51.042 malloc0 00:13:51.042 [2024-12-06 13:52:50.402076] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:51.042 [2024-12-06 13:52:50.402323] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:51.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:51.042 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.042 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=72300 00:13:51.042 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:13:51.042 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 72300 /var/tmp/bdevperf.sock 00:13:51.042 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72300 ']' 00:13:51.042 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:51.042 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:51.042 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
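As in the earlier bdevperf run (target/tls.sh@229-@230 above), the bdevperf process just started is about to do the same on the initiator side: the PSK file is registered with the bdevperf application's own keyring over /var/tmp/bdevperf.sock, and the controller is attached with --psk so the NVMe/TCP connection is established over TLS. Condensed, again abbreviating the rpc.py path:

  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4sBmVr7jSK
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1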
00:13:51.042 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:51.042 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:51.301 [2024-12-06 13:52:50.488271] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:13:51.301 [2024-12-06 13:52:50.488543] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72300 ] 00:13:51.301 [2024-12-06 13:52:50.635872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.301 [2024-12-06 13:52:50.677884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.560 [2024-12-06 13:52:50.733837] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:51.560 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:51.560 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:51.560 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4sBmVr7jSK 00:13:51.819 13:52:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:52.078 [2024-12-06 13:52:51.277650] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:52.078 nvme0n1 00:13:52.078 13:52:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:52.078 Running I/O for 1 seconds... 
00:13:53.458 4527.00 IOPS, 17.68 MiB/s 00:13:53.458 Latency(us) 00:13:53.458 [2024-12-06T13:52:52.862Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:53.458 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:53.458 Verification LBA range: start 0x0 length 0x2000 00:13:53.458 nvme0n1 : 1.01 4593.02 17.94 0.00 0.00 27659.38 4200.26 20494.89 00:13:53.458 [2024-12-06T13:52:52.862Z] =================================================================================================================== 00:13:53.458 [2024-12-06T13:52:52.862Z] Total : 4593.02 17.94 0.00 0.00 27659.38 4200.26 20494.89 00:13:53.458 { 00:13:53.458 "results": [ 00:13:53.458 { 00:13:53.458 "job": "nvme0n1", 00:13:53.458 "core_mask": "0x2", 00:13:53.458 "workload": "verify", 00:13:53.458 "status": "finished", 00:13:53.458 "verify_range": { 00:13:53.458 "start": 0, 00:13:53.458 "length": 8192 00:13:53.458 }, 00:13:53.458 "queue_depth": 128, 00:13:53.458 "io_size": 4096, 00:13:53.458 "runtime": 1.013494, 00:13:53.458 "iops": 4593.021764312369, 00:13:53.458 "mibps": 17.94149126684519, 00:13:53.458 "io_failed": 0, 00:13:53.458 "io_timeout": 0, 00:13:53.458 "avg_latency_us": 27659.378888780393, 00:13:53.458 "min_latency_us": 4200.261818181818, 00:13:53.458 "max_latency_us": 20494.894545454546 00:13:53.458 } 00:13:53.458 ], 00:13:53.458 "core_count": 1 00:13:53.458 } 00:13:53.458 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:13:53.458 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.458 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:53.458 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.458 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:13:53.458 "subsystems": [ 00:13:53.458 { 00:13:53.458 "subsystem": "keyring", 00:13:53.458 "config": [ 00:13:53.458 { 00:13:53.458 "method": "keyring_file_add_key", 00:13:53.458 "params": { 00:13:53.458 "name": "key0", 00:13:53.458 "path": "/tmp/tmp.4sBmVr7jSK" 00:13:53.458 } 00:13:53.458 } 00:13:53.458 ] 00:13:53.458 }, 00:13:53.458 { 00:13:53.458 "subsystem": "iobuf", 00:13:53.458 "config": [ 00:13:53.458 { 00:13:53.458 "method": "iobuf_set_options", 00:13:53.458 "params": { 00:13:53.458 "small_pool_count": 8192, 00:13:53.458 "large_pool_count": 1024, 00:13:53.458 "small_bufsize": 8192, 00:13:53.458 "large_bufsize": 135168, 00:13:53.458 "enable_numa": false 00:13:53.458 } 00:13:53.458 } 00:13:53.458 ] 00:13:53.458 }, 00:13:53.458 { 00:13:53.458 "subsystem": "sock", 00:13:53.458 "config": [ 00:13:53.458 { 00:13:53.458 "method": "sock_set_default_impl", 00:13:53.458 "params": { 00:13:53.458 "impl_name": "uring" 00:13:53.458 } 00:13:53.458 }, 00:13:53.458 { 00:13:53.458 "method": "sock_impl_set_options", 00:13:53.458 "params": { 00:13:53.458 "impl_name": "ssl", 00:13:53.458 "recv_buf_size": 4096, 00:13:53.458 "send_buf_size": 4096, 00:13:53.458 "enable_recv_pipe": true, 00:13:53.458 "enable_quickack": false, 00:13:53.458 "enable_placement_id": 0, 00:13:53.458 "enable_zerocopy_send_server": true, 00:13:53.458 "enable_zerocopy_send_client": false, 00:13:53.458 "zerocopy_threshold": 0, 00:13:53.458 "tls_version": 0, 00:13:53.458 "enable_ktls": false 00:13:53.458 } 00:13:53.458 }, 00:13:53.458 { 00:13:53.458 "method": "sock_impl_set_options", 00:13:53.458 "params": { 00:13:53.458 "impl_name": "posix", 
00:13:53.458 "recv_buf_size": 2097152, 00:13:53.458 "send_buf_size": 2097152, 00:13:53.458 "enable_recv_pipe": true, 00:13:53.458 "enable_quickack": false, 00:13:53.458 "enable_placement_id": 0, 00:13:53.458 "enable_zerocopy_send_server": true, 00:13:53.458 "enable_zerocopy_send_client": false, 00:13:53.458 "zerocopy_threshold": 0, 00:13:53.458 "tls_version": 0, 00:13:53.458 "enable_ktls": false 00:13:53.458 } 00:13:53.458 }, 00:13:53.458 { 00:13:53.458 "method": "sock_impl_set_options", 00:13:53.458 "params": { 00:13:53.458 "impl_name": "uring", 00:13:53.458 "recv_buf_size": 2097152, 00:13:53.458 "send_buf_size": 2097152, 00:13:53.458 "enable_recv_pipe": true, 00:13:53.458 "enable_quickack": false, 00:13:53.458 "enable_placement_id": 0, 00:13:53.458 "enable_zerocopy_send_server": false, 00:13:53.458 "enable_zerocopy_send_client": false, 00:13:53.458 "zerocopy_threshold": 0, 00:13:53.458 "tls_version": 0, 00:13:53.458 "enable_ktls": false 00:13:53.458 } 00:13:53.458 } 00:13:53.458 ] 00:13:53.458 }, 00:13:53.458 { 00:13:53.458 "subsystem": "vmd", 00:13:53.458 "config": [] 00:13:53.458 }, 00:13:53.458 { 00:13:53.458 "subsystem": "accel", 00:13:53.458 "config": [ 00:13:53.458 { 00:13:53.458 "method": "accel_set_options", 00:13:53.458 "params": { 00:13:53.458 "small_cache_size": 128, 00:13:53.458 "large_cache_size": 16, 00:13:53.458 "task_count": 2048, 00:13:53.458 "sequence_count": 2048, 00:13:53.458 "buf_count": 2048 00:13:53.458 } 00:13:53.458 } 00:13:53.458 ] 00:13:53.458 }, 00:13:53.458 { 00:13:53.458 "subsystem": "bdev", 00:13:53.458 "config": [ 00:13:53.458 { 00:13:53.458 "method": "bdev_set_options", 00:13:53.458 "params": { 00:13:53.458 "bdev_io_pool_size": 65535, 00:13:53.458 "bdev_io_cache_size": 256, 00:13:53.458 "bdev_auto_examine": true, 00:13:53.458 "iobuf_small_cache_size": 128, 00:13:53.458 "iobuf_large_cache_size": 16 00:13:53.458 } 00:13:53.458 }, 00:13:53.458 { 00:13:53.458 "method": "bdev_raid_set_options", 00:13:53.458 "params": { 00:13:53.458 "process_window_size_kb": 1024, 00:13:53.458 "process_max_bandwidth_mb_sec": 0 00:13:53.458 } 00:13:53.458 }, 00:13:53.458 { 00:13:53.458 "method": "bdev_iscsi_set_options", 00:13:53.458 "params": { 00:13:53.458 "timeout_sec": 30 00:13:53.458 } 00:13:53.458 }, 00:13:53.458 { 00:13:53.458 "method": "bdev_nvme_set_options", 00:13:53.458 "params": { 00:13:53.458 "action_on_timeout": "none", 00:13:53.459 "timeout_us": 0, 00:13:53.459 "timeout_admin_us": 0, 00:13:53.459 "keep_alive_timeout_ms": 10000, 00:13:53.459 "arbitration_burst": 0, 00:13:53.459 "low_priority_weight": 0, 00:13:53.459 "medium_priority_weight": 0, 00:13:53.459 "high_priority_weight": 0, 00:13:53.459 "nvme_adminq_poll_period_us": 10000, 00:13:53.459 "nvme_ioq_poll_period_us": 0, 00:13:53.459 "io_queue_requests": 0, 00:13:53.459 "delay_cmd_submit": true, 00:13:53.459 "transport_retry_count": 4, 00:13:53.459 "bdev_retry_count": 3, 00:13:53.459 "transport_ack_timeout": 0, 00:13:53.459 "ctrlr_loss_timeout_sec": 0, 00:13:53.459 "reconnect_delay_sec": 0, 00:13:53.459 "fast_io_fail_timeout_sec": 0, 00:13:53.459 "disable_auto_failback": false, 00:13:53.459 "generate_uuids": false, 00:13:53.459 "transport_tos": 0, 00:13:53.459 "nvme_error_stat": false, 00:13:53.459 "rdma_srq_size": 0, 00:13:53.459 "io_path_stat": false, 00:13:53.459 "allow_accel_sequence": false, 00:13:53.459 "rdma_max_cq_size": 0, 00:13:53.459 "rdma_cm_event_timeout_ms": 0, 00:13:53.459 "dhchap_digests": [ 00:13:53.459 "sha256", 00:13:53.459 "sha384", 00:13:53.459 "sha512" 00:13:53.459 ], 00:13:53.459 
"dhchap_dhgroups": [ 00:13:53.459 "null", 00:13:53.459 "ffdhe2048", 00:13:53.459 "ffdhe3072", 00:13:53.459 "ffdhe4096", 00:13:53.459 "ffdhe6144", 00:13:53.459 "ffdhe8192" 00:13:53.459 ] 00:13:53.459 } 00:13:53.459 }, 00:13:53.459 { 00:13:53.459 "method": "bdev_nvme_set_hotplug", 00:13:53.459 "params": { 00:13:53.459 "period_us": 100000, 00:13:53.459 "enable": false 00:13:53.459 } 00:13:53.459 }, 00:13:53.459 { 00:13:53.459 "method": "bdev_malloc_create", 00:13:53.459 "params": { 00:13:53.459 "name": "malloc0", 00:13:53.459 "num_blocks": 8192, 00:13:53.459 "block_size": 4096, 00:13:53.459 "physical_block_size": 4096, 00:13:53.459 "uuid": "600ee65f-ece0-4c14-96e7-e484eaf4b095", 00:13:53.459 "optimal_io_boundary": 0, 00:13:53.459 "md_size": 0, 00:13:53.459 "dif_type": 0, 00:13:53.459 "dif_is_head_of_md": false, 00:13:53.459 "dif_pi_format": 0 00:13:53.459 } 00:13:53.459 }, 00:13:53.459 { 00:13:53.459 "method": "bdev_wait_for_examine" 00:13:53.459 } 00:13:53.459 ] 00:13:53.459 }, 00:13:53.459 { 00:13:53.459 "subsystem": "nbd", 00:13:53.459 "config": [] 00:13:53.459 }, 00:13:53.459 { 00:13:53.459 "subsystem": "scheduler", 00:13:53.459 "config": [ 00:13:53.459 { 00:13:53.459 "method": "framework_set_scheduler", 00:13:53.459 "params": { 00:13:53.459 "name": "static" 00:13:53.459 } 00:13:53.459 } 00:13:53.459 ] 00:13:53.459 }, 00:13:53.459 { 00:13:53.459 "subsystem": "nvmf", 00:13:53.459 "config": [ 00:13:53.459 { 00:13:53.459 "method": "nvmf_set_config", 00:13:53.459 "params": { 00:13:53.459 "discovery_filter": "match_any", 00:13:53.459 "admin_cmd_passthru": { 00:13:53.459 "identify_ctrlr": false 00:13:53.459 }, 00:13:53.459 "dhchap_digests": [ 00:13:53.459 "sha256", 00:13:53.459 "sha384", 00:13:53.459 "sha512" 00:13:53.459 ], 00:13:53.459 "dhchap_dhgroups": [ 00:13:53.459 "null", 00:13:53.459 "ffdhe2048", 00:13:53.459 "ffdhe3072", 00:13:53.459 "ffdhe4096", 00:13:53.459 "ffdhe6144", 00:13:53.459 "ffdhe8192" 00:13:53.459 ] 00:13:53.459 } 00:13:53.459 }, 00:13:53.459 { 00:13:53.459 "method": "nvmf_set_max_subsystems", 00:13:53.459 "params": { 00:13:53.459 "max_subsystems": 1024 00:13:53.459 } 00:13:53.459 }, 00:13:53.459 { 00:13:53.459 "method": "nvmf_set_crdt", 00:13:53.459 "params": { 00:13:53.459 "crdt1": 0, 00:13:53.459 "crdt2": 0, 00:13:53.459 "crdt3": 0 00:13:53.459 } 00:13:53.459 }, 00:13:53.459 { 00:13:53.459 "method": "nvmf_create_transport", 00:13:53.459 "params": { 00:13:53.459 "trtype": "TCP", 00:13:53.459 "max_queue_depth": 128, 00:13:53.459 "max_io_qpairs_per_ctrlr": 127, 00:13:53.459 "in_capsule_data_size": 4096, 00:13:53.459 "max_io_size": 131072, 00:13:53.459 "io_unit_size": 131072, 00:13:53.459 "max_aq_depth": 128, 00:13:53.459 "num_shared_buffers": 511, 00:13:53.459 "buf_cache_size": 4294967295, 00:13:53.459 "dif_insert_or_strip": false, 00:13:53.459 "zcopy": false, 00:13:53.459 "c2h_success": false, 00:13:53.459 "sock_priority": 0, 00:13:53.459 "abort_timeout_sec": 1, 00:13:53.459 "ack_timeout": 0, 00:13:53.459 "data_wr_pool_size": 0 00:13:53.459 } 00:13:53.459 }, 00:13:53.459 { 00:13:53.459 "method": "nvmf_create_subsystem", 00:13:53.459 "params": { 00:13:53.459 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:53.459 "allow_any_host": false, 00:13:53.459 "serial_number": "00000000000000000000", 00:13:53.459 "model_number": "SPDK bdev Controller", 00:13:53.459 "max_namespaces": 32, 00:13:53.459 "min_cntlid": 1, 00:13:53.459 "max_cntlid": 65519, 00:13:53.459 "ana_reporting": false 00:13:53.459 } 00:13:53.459 }, 00:13:53.459 { 00:13:53.459 "method": "nvmf_subsystem_add_host", 
00:13:53.459 "params": { 00:13:53.459 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:53.459 "host": "nqn.2016-06.io.spdk:host1", 00:13:53.459 "psk": "key0" 00:13:53.459 } 00:13:53.459 }, 00:13:53.459 { 00:13:53.459 "method": "nvmf_subsystem_add_ns", 00:13:53.459 "params": { 00:13:53.459 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:53.459 "namespace": { 00:13:53.459 "nsid": 1, 00:13:53.459 "bdev_name": "malloc0", 00:13:53.459 "nguid": "600EE65FECE04C1496E7E484EAF4B095", 00:13:53.459 "uuid": "600ee65f-ece0-4c14-96e7-e484eaf4b095", 00:13:53.459 "no_auto_visible": false 00:13:53.459 } 00:13:53.459 } 00:13:53.459 }, 00:13:53.459 { 00:13:53.459 "method": "nvmf_subsystem_add_listener", 00:13:53.459 "params": { 00:13:53.459 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:53.459 "listen_address": { 00:13:53.459 "trtype": "TCP", 00:13:53.459 "adrfam": "IPv4", 00:13:53.459 "traddr": "10.0.0.3", 00:13:53.459 "trsvcid": "4420" 00:13:53.459 }, 00:13:53.459 "secure_channel": false, 00:13:53.459 "sock_impl": "ssl" 00:13:53.459 } 00:13:53.459 } 00:13:53.459 ] 00:13:53.459 } 00:13:53.459 ] 00:13:53.459 }' 00:13:53.459 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:13:53.719 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:13:53.719 "subsystems": [ 00:13:53.719 { 00:13:53.719 "subsystem": "keyring", 00:13:53.719 "config": [ 00:13:53.719 { 00:13:53.719 "method": "keyring_file_add_key", 00:13:53.719 "params": { 00:13:53.719 "name": "key0", 00:13:53.719 "path": "/tmp/tmp.4sBmVr7jSK" 00:13:53.719 } 00:13:53.719 } 00:13:53.719 ] 00:13:53.719 }, 00:13:53.719 { 00:13:53.719 "subsystem": "iobuf", 00:13:53.719 "config": [ 00:13:53.719 { 00:13:53.719 "method": "iobuf_set_options", 00:13:53.719 "params": { 00:13:53.719 "small_pool_count": 8192, 00:13:53.719 "large_pool_count": 1024, 00:13:53.719 "small_bufsize": 8192, 00:13:53.720 "large_bufsize": 135168, 00:13:53.720 "enable_numa": false 00:13:53.720 } 00:13:53.720 } 00:13:53.720 ] 00:13:53.720 }, 00:13:53.720 { 00:13:53.720 "subsystem": "sock", 00:13:53.720 "config": [ 00:13:53.720 { 00:13:53.720 "method": "sock_set_default_impl", 00:13:53.720 "params": { 00:13:53.720 "impl_name": "uring" 00:13:53.720 } 00:13:53.720 }, 00:13:53.720 { 00:13:53.720 "method": "sock_impl_set_options", 00:13:53.720 "params": { 00:13:53.720 "impl_name": "ssl", 00:13:53.720 "recv_buf_size": 4096, 00:13:53.720 "send_buf_size": 4096, 00:13:53.720 "enable_recv_pipe": true, 00:13:53.720 "enable_quickack": false, 00:13:53.720 "enable_placement_id": 0, 00:13:53.720 "enable_zerocopy_send_server": true, 00:13:53.720 "enable_zerocopy_send_client": false, 00:13:53.720 "zerocopy_threshold": 0, 00:13:53.720 "tls_version": 0, 00:13:53.720 "enable_ktls": false 00:13:53.720 } 00:13:53.720 }, 00:13:53.720 { 00:13:53.720 "method": "sock_impl_set_options", 00:13:53.720 "params": { 00:13:53.720 "impl_name": "posix", 00:13:53.720 "recv_buf_size": 2097152, 00:13:53.720 "send_buf_size": 2097152, 00:13:53.720 "enable_recv_pipe": true, 00:13:53.720 "enable_quickack": false, 00:13:53.720 "enable_placement_id": 0, 00:13:53.720 "enable_zerocopy_send_server": true, 00:13:53.720 "enable_zerocopy_send_client": false, 00:13:53.720 "zerocopy_threshold": 0, 00:13:53.720 "tls_version": 0, 00:13:53.720 "enable_ktls": false 00:13:53.720 } 00:13:53.720 }, 00:13:53.720 { 00:13:53.720 "method": "sock_impl_set_options", 00:13:53.720 "params": { 00:13:53.720 "impl_name": "uring", 00:13:53.720 
"recv_buf_size": 2097152, 00:13:53.720 "send_buf_size": 2097152, 00:13:53.720 "enable_recv_pipe": true, 00:13:53.720 "enable_quickack": false, 00:13:53.720 "enable_placement_id": 0, 00:13:53.720 "enable_zerocopy_send_server": false, 00:13:53.720 "enable_zerocopy_send_client": false, 00:13:53.720 "zerocopy_threshold": 0, 00:13:53.720 "tls_version": 0, 00:13:53.720 "enable_ktls": false 00:13:53.720 } 00:13:53.720 } 00:13:53.720 ] 00:13:53.720 }, 00:13:53.720 { 00:13:53.720 "subsystem": "vmd", 00:13:53.720 "config": [] 00:13:53.720 }, 00:13:53.720 { 00:13:53.720 "subsystem": "accel", 00:13:53.720 "config": [ 00:13:53.720 { 00:13:53.720 "method": "accel_set_options", 00:13:53.720 "params": { 00:13:53.720 "small_cache_size": 128, 00:13:53.720 "large_cache_size": 16, 00:13:53.720 "task_count": 2048, 00:13:53.720 "sequence_count": 2048, 00:13:53.720 "buf_count": 2048 00:13:53.720 } 00:13:53.720 } 00:13:53.720 ] 00:13:53.720 }, 00:13:53.720 { 00:13:53.720 "subsystem": "bdev", 00:13:53.720 "config": [ 00:13:53.720 { 00:13:53.720 "method": "bdev_set_options", 00:13:53.720 "params": { 00:13:53.720 "bdev_io_pool_size": 65535, 00:13:53.720 "bdev_io_cache_size": 256, 00:13:53.720 "bdev_auto_examine": true, 00:13:53.720 "iobuf_small_cache_size": 128, 00:13:53.720 "iobuf_large_cache_size": 16 00:13:53.720 } 00:13:53.720 }, 00:13:53.720 { 00:13:53.720 "method": "bdev_raid_set_options", 00:13:53.720 "params": { 00:13:53.720 "process_window_size_kb": 1024, 00:13:53.720 "process_max_bandwidth_mb_sec": 0 00:13:53.720 } 00:13:53.720 }, 00:13:53.720 { 00:13:53.720 "method": "bdev_iscsi_set_options", 00:13:53.720 "params": { 00:13:53.720 "timeout_sec": 30 00:13:53.720 } 00:13:53.720 }, 00:13:53.720 { 00:13:53.720 "method": "bdev_nvme_set_options", 00:13:53.720 "params": { 00:13:53.720 "action_on_timeout": "none", 00:13:53.720 "timeout_us": 0, 00:13:53.720 "timeout_admin_us": 0, 00:13:53.720 "keep_alive_timeout_ms": 10000, 00:13:53.720 "arbitration_burst": 0, 00:13:53.720 "low_priority_weight": 0, 00:13:53.720 "medium_priority_weight": 0, 00:13:53.720 "high_priority_weight": 0, 00:13:53.720 "nvme_adminq_poll_period_us": 10000, 00:13:53.720 "nvme_ioq_poll_period_us": 0, 00:13:53.720 "io_queue_requests": 512, 00:13:53.720 "delay_cmd_submit": true, 00:13:53.720 "transport_retry_count": 4, 00:13:53.720 "bdev_retry_count": 3, 00:13:53.720 "transport_ack_timeout": 0, 00:13:53.720 "ctrlr_loss_timeout_sec": 0, 00:13:53.720 "reconnect_delay_sec": 0, 00:13:53.720 "fast_io_fail_timeout_sec": 0, 00:13:53.720 "disable_auto_failback": false, 00:13:53.720 "generate_uuids": false, 00:13:53.720 "transport_tos": 0, 00:13:53.720 "nvme_error_stat": false, 00:13:53.720 "rdma_srq_size": 0, 00:13:53.720 "io_path_stat": false, 00:13:53.720 "allow_accel_sequence": false, 00:13:53.720 "rdma_max_cq_size": 0, 00:13:53.720 "rdma_cm_event_timeout_ms": 0, 00:13:53.720 "dhchap_digests": [ 00:13:53.720 "sha256", 00:13:53.720 "sha384", 00:13:53.720 "sha512" 00:13:53.720 ], 00:13:53.720 "dhchap_dhgroups": [ 00:13:53.720 "null", 00:13:53.720 "ffdhe2048", 00:13:53.720 "ffdhe3072", 00:13:53.720 "ffdhe4096", 00:13:53.720 "ffdhe6144", 00:13:53.720 "ffdhe8192" 00:13:53.720 ] 00:13:53.720 } 00:13:53.720 }, 00:13:53.720 { 00:13:53.720 "method": "bdev_nvme_attach_controller", 00:13:53.720 "params": { 00:13:53.720 "name": "nvme0", 00:13:53.720 "trtype": "TCP", 00:13:53.720 "adrfam": "IPv4", 00:13:53.720 "traddr": "10.0.0.3", 00:13:53.720 "trsvcid": "4420", 00:13:53.720 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:53.720 "prchk_reftag": false, 00:13:53.720 
"prchk_guard": false, 00:13:53.720 "ctrlr_loss_timeout_sec": 0, 00:13:53.720 "reconnect_delay_sec": 0, 00:13:53.720 "fast_io_fail_timeout_sec": 0, 00:13:53.720 "psk": "key0", 00:13:53.720 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:53.720 "hdgst": false, 00:13:53.720 "ddgst": false, 00:13:53.720 "multipath": "multipath" 00:13:53.720 } 00:13:53.720 }, 00:13:53.720 { 00:13:53.720 "method": "bdev_nvme_set_hotplug", 00:13:53.720 "params": { 00:13:53.720 "period_us": 100000, 00:13:53.720 "enable": false 00:13:53.720 } 00:13:53.720 }, 00:13:53.720 { 00:13:53.720 "method": "bdev_enable_histogram", 00:13:53.720 "params": { 00:13:53.720 "name": "nvme0n1", 00:13:53.720 "enable": true 00:13:53.720 } 00:13:53.720 }, 00:13:53.720 { 00:13:53.720 "method": "bdev_wait_for_examine" 00:13:53.720 } 00:13:53.720 ] 00:13:53.720 }, 00:13:53.720 { 00:13:53.720 "subsystem": "nbd", 00:13:53.720 "config": [] 00:13:53.720 } 00:13:53.720 ] 00:13:53.720 }' 00:13:53.720 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 72300 00:13:53.720 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72300 ']' 00:13:53.720 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72300 00:13:53.720 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:53.720 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:53.720 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72300 00:13:53.720 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:53.720 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:53.720 killing process with pid 72300 00:13:53.720 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72300' 00:13:53.720 Received shutdown signal, test time was about 1.000000 seconds 00:13:53.720 00:13:53.720 Latency(us) 00:13:53.720 [2024-12-06T13:52:53.124Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:53.720 [2024-12-06T13:52:53.124Z] =================================================================================================================== 00:13:53.720 [2024-12-06T13:52:53.124Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:53.720 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72300 00:13:53.720 13:52:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72300 00:13:53.981 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 72268 00:13:53.981 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72268 ']' 00:13:53.981 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72268 00:13:53.981 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:53.981 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:53.981 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72268 00:13:53.981 killing process with pid 72268 00:13:53.981 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:13:53.981 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:53.981 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72268' 00:13:53.981 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72268 00:13:53.981 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72268 00:13:54.240 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:13:54.240 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:54.240 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:54.241 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:13:54.241 "subsystems": [ 00:13:54.241 { 00:13:54.241 "subsystem": "keyring", 00:13:54.241 "config": [ 00:13:54.241 { 00:13:54.241 "method": "keyring_file_add_key", 00:13:54.241 "params": { 00:13:54.241 "name": "key0", 00:13:54.241 "path": "/tmp/tmp.4sBmVr7jSK" 00:13:54.241 } 00:13:54.241 } 00:13:54.241 ] 00:13:54.241 }, 00:13:54.241 { 00:13:54.241 "subsystem": "iobuf", 00:13:54.241 "config": [ 00:13:54.241 { 00:13:54.241 "method": "iobuf_set_options", 00:13:54.241 "params": { 00:13:54.241 "small_pool_count": 8192, 00:13:54.241 "large_pool_count": 1024, 00:13:54.241 "small_bufsize": 8192, 00:13:54.241 "large_bufsize": 135168, 00:13:54.241 "enable_numa": false 00:13:54.241 } 00:13:54.241 } 00:13:54.241 ] 00:13:54.241 }, 00:13:54.241 { 00:13:54.241 "subsystem": "sock", 00:13:54.241 "config": [ 00:13:54.241 { 00:13:54.241 "method": "sock_set_default_impl", 00:13:54.241 "params": { 00:13:54.241 "impl_name": "uring" 00:13:54.241 } 00:13:54.241 }, 00:13:54.241 { 00:13:54.241 "method": "sock_impl_set_options", 00:13:54.241 "params": { 00:13:54.241 "impl_name": "ssl", 00:13:54.241 "recv_buf_size": 4096, 00:13:54.241 "send_buf_size": 4096, 00:13:54.241 "enable_recv_pipe": true, 00:13:54.241 "enable_quickack": false, 00:13:54.241 "enable_placement_id": 0, 00:13:54.241 "enable_zerocopy_send_server": true, 00:13:54.241 "enable_zerocopy_send_client": false, 00:13:54.241 "zerocopy_threshold": 0, 00:13:54.241 "tls_version": 0, 00:13:54.241 "enable_ktls": false 00:13:54.241 } 00:13:54.241 }, 00:13:54.241 { 00:13:54.241 "method": "sock_impl_set_options", 00:13:54.241 "params": { 00:13:54.241 "impl_name": "posix", 00:13:54.241 "recv_buf_size": 2097152, 00:13:54.241 "send_buf_size": 2097152, 00:13:54.241 "enable_recv_pipe": true, 00:13:54.241 "enable_quickack": false, 00:13:54.241 "enable_placement_id": 0, 00:13:54.241 "enable_zerocopy_send_server": true, 00:13:54.241 "enable_zerocopy_send_client": false, 00:13:54.241 "zerocopy_threshold": 0, 00:13:54.241 "tls_version": 0, 00:13:54.241 "enable_ktls": false 00:13:54.241 } 00:13:54.241 }, 00:13:54.241 { 00:13:54.241 "method": "sock_impl_set_options", 00:13:54.241 "params": { 00:13:54.241 "impl_name": "uring", 00:13:54.241 "recv_buf_size": 2097152, 00:13:54.241 "send_buf_size": 2097152, 00:13:54.241 "enable_recv_pipe": true, 00:13:54.241 "enable_quickack": false, 00:13:54.241 "enable_placement_id": 0, 00:13:54.241 "enable_zerocopy_send_server": false, 00:13:54.241 "enable_zerocopy_send_client": false, 00:13:54.241 "zerocopy_threshold": 0, 00:13:54.241 "tls_version": 0, 00:13:54.241 "enable_ktls": false 00:13:54.241 } 00:13:54.241 } 00:13:54.241 ] 00:13:54.241 }, 00:13:54.241 { 
00:13:54.241 "subsystem": "vmd", 00:13:54.241 "config": [] 00:13:54.241 }, 00:13:54.241 { 00:13:54.241 "subsystem": "accel", 00:13:54.241 "config": [ 00:13:54.241 { 00:13:54.241 "method": "accel_set_options", 00:13:54.241 "params": { 00:13:54.241 "small_cache_size": 128, 00:13:54.241 "large_cache_size": 16, 00:13:54.241 "task_count": 2048, 00:13:54.241 "sequence_count": 2048, 00:13:54.241 "buf_count": 2048 00:13:54.241 } 00:13:54.241 } 00:13:54.241 ] 00:13:54.241 }, 00:13:54.241 { 00:13:54.241 "subsystem": "bdev", 00:13:54.241 "config": [ 00:13:54.241 { 00:13:54.241 "method": "bdev_set_options", 00:13:54.241 "params": { 00:13:54.241 "bdev_io_pool_size": 65535, 00:13:54.241 "bdev_io_cache_size": 256, 00:13:54.241 "bdev_auto_examine": true, 00:13:54.241 "iobuf_small_cache_size": 128, 00:13:54.241 "iobuf_large_cache_size": 16 00:13:54.241 } 00:13:54.241 }, 00:13:54.241 { 00:13:54.241 "method": "bdev_raid_set_options", 00:13:54.241 "params": { 00:13:54.241 "process_window_size_kb": 1024, 00:13:54.241 "process_max_bandwidth_mb_sec": 0 00:13:54.241 } 00:13:54.241 }, 00:13:54.241 { 00:13:54.241 "method": "bdev_iscsi_set_options", 00:13:54.241 "params": { 00:13:54.241 "timeout_sec": 30 00:13:54.241 } 00:13:54.241 }, 00:13:54.241 { 00:13:54.241 "method": "bdev_nvme_set_options", 00:13:54.241 "params": { 00:13:54.241 "action_on_timeout": "none", 00:13:54.241 "timeout_us": 0, 00:13:54.241 "timeout_admin_us": 0, 00:13:54.241 "keep_alive_timeout_ms": 10000, 00:13:54.241 "arbitration_burst": 0, 00:13:54.241 "low_priority_weight": 0, 00:13:54.241 "medium_priority_weight": 0, 00:13:54.241 "high_priority_weight": 0, 00:13:54.241 "nvme_adminq_poll_period_us": 10000, 00:13:54.241 "nvme_ioq_poll_period_us": 0, 00:13:54.241 "io_queue_requests": 0, 00:13:54.241 "delay_cmd_submit": true, 00:13:54.241 "transport_retry_count": 4, 00:13:54.241 "bdev_retry_count": 3, 00:13:54.241 "transport_ack_timeout": 0, 00:13:54.241 "ctrlr_loss_timeout_sec": 0, 00:13:54.241 "reconnect_delay_sec": 0, 00:13:54.241 "fast_io_fail_timeout_sec": 0, 00:13:54.241 "disable_auto_failback": false, 00:13:54.241 "generate_uuids": false, 00:13:54.241 "transport_tos": 0, 00:13:54.241 "nvme_error_stat": false, 00:13:54.241 "rdma_srq_size": 0, 00:13:54.241 "io_path_stat": false, 00:13:54.241 "allow_accel_sequence": false, 00:13:54.241 "rdma_max_cq_size": 0, 00:13:54.241 "rdma_cm_event_timeout_ms": 0, 00:13:54.241 "dhchap_digests": [ 00:13:54.241 "sha256", 00:13:54.241 "sha384", 00:13:54.241 "sha512" 00:13:54.241 ], 00:13:54.241 "dhchap_dhgroups": [ 00:13:54.241 "null", 00:13:54.241 "ffdhe2048", 00:13:54.241 "ffdhe3072", 00:13:54.241 "ffdhe4096", 00:13:54.241 "ffdhe6144", 00:13:54.241 "ffdhe8192" 00:13:54.241 ] 00:13:54.241 } 00:13:54.241 }, 00:13:54.241 { 00:13:54.241 "method": "bdev_nvme_set_hotplug", 00:13:54.241 "params": { 00:13:54.241 "period_us": 100000, 00:13:54.241 "enable": false 00:13:54.242 } 00:13:54.242 }, 00:13:54.242 { 00:13:54.242 "method": "bdev_malloc_create", 00:13:54.242 "params": { 00:13:54.242 "name": "malloc0", 00:13:54.242 "num_blocks": 8192, 00:13:54.242 "block_size": 4096, 00:13:54.242 "physical_block_size": 4096, 00:13:54.242 "uuid": "600ee65f-ece0-4c14-96e7-e484eaf4b095", 00:13:54.242 "optimal_io_boundary": 0, 00:13:54.242 "md_size": 0, 00:13:54.242 "dif_type": 0, 00:13:54.242 "dif_is_head_of_md": false, 00:13:54.242 "dif_pi_format": 0 00:13:54.242 } 00:13:54.242 }, 00:13:54.242 { 00:13:54.242 "method": "bdev_wait_for_examine" 00:13:54.242 } 00:13:54.242 ] 00:13:54.242 }, 00:13:54.242 { 00:13:54.242 "subsystem": 
"nbd", 00:13:54.242 "config": [] 00:13:54.242 }, 00:13:54.242 { 00:13:54.242 "subsystem": "scheduler", 00:13:54.242 "config": [ 00:13:54.242 { 00:13:54.242 "method": "framework_set_scheduler", 00:13:54.242 "params": { 00:13:54.242 "name": "static" 00:13:54.242 } 00:13:54.242 } 00:13:54.242 ] 00:13:54.242 }, 00:13:54.242 { 00:13:54.242 "subsystem": "nvmf", 00:13:54.242 "config": [ 00:13:54.242 { 00:13:54.242 "method": "nvmf_set_config", 00:13:54.242 "params": { 00:13:54.242 "discovery_filter": "match_any", 00:13:54.242 "admin_cmd_passthru": { 00:13:54.242 "identify_ctrlr": false 00:13:54.242 }, 00:13:54.242 "dhchap_digests": [ 00:13:54.242 "sha256", 00:13:54.242 "sha384", 00:13:54.242 "sha512" 00:13:54.242 ], 00:13:54.242 "dhchap_dhgroups": [ 00:13:54.242 "null", 00:13:54.242 "ffdhe2048", 00:13:54.242 "ffdhe3072", 00:13:54.242 "ffdhe4096", 00:13:54.242 "ffdhe6144", 00:13:54.242 "ffdhe8192" 00:13:54.242 ] 00:13:54.242 } 00:13:54.242 }, 00:13:54.242 { 00:13:54.242 "method": "nvmf_set_max_subsystems", 00:13:54.242 "params": { 00:13:54.242 "max_subsystems": 1024 00:13:54.242 } 00:13:54.242 }, 00:13:54.242 { 00:13:54.242 "method": "nvmf_set_crdt", 00:13:54.242 "params": { 00:13:54.242 "crdt1": 0, 00:13:54.242 "crdt2": 0, 00:13:54.242 "crdt3": 0 00:13:54.242 } 00:13:54.242 }, 00:13:54.242 { 00:13:54.242 "method": "nvmf_create_transport", 00:13:54.242 "params": { 00:13:54.242 "trtype": "TCP", 00:13:54.242 "max_queue_depth": 128, 00:13:54.242 "max_io_qpairs_per_ctrlr": 127, 00:13:54.242 "in_capsule_data_size": 4096, 00:13:54.242 "max_io_size": 131072, 00:13:54.242 "io_unit_size": 131072, 00:13:54.242 "max_aq_depth": 128, 00:13:54.242 "num_shared_buffers": 511, 00:13:54.242 "buf_cache_size": 4294967295, 00:13:54.242 "dif_insert_or_strip": false, 00:13:54.242 "zcopy": false, 00:13:54.242 "c2h_success": false, 00:13:54.242 "sock_priority": 0, 00:13:54.242 "abort_timeout_sec": 1, 00:13:54.242 "ack_timeout": 0, 00:13:54.242 "data_wr_pool_size": 0 00:13:54.242 } 00:13:54.242 }, 00:13:54.242 { 00:13:54.242 "method": "nvmf_create_subsystem", 00:13:54.242 "params": { 00:13:54.242 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:54.242 "allow_any_host": false, 00:13:54.242 "serial_number": "00000000000000000000", 00:13:54.242 "model_number": "SPDK bdev Controller", 00:13:54.242 "max_namespaces": 32, 00:13:54.242 "min_cntlid": 1, 00:13:54.242 "max_cntlid": 65519, 00:13:54.242 "ana_reporting": false 00:13:54.242 } 00:13:54.242 }, 00:13:54.242 { 00:13:54.242 "method": "nvmf_subsystem_add_host", 00:13:54.242 "params": { 00:13:54.242 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:54.242 "host": "nqn.2016-06.io.spdk:host1", 00:13:54.242 "psk": "key0" 00:13:54.242 } 00:13:54.242 }, 00:13:54.242 { 00:13:54.242 "method": "nvmf_subsystem_add_ns", 00:13:54.242 "params": { 00:13:54.242 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:54.242 "namespace": { 00:13:54.242 "nsid": 1, 00:13:54.242 "bdev_name": "malloc0", 00:13:54.242 "nguid": "600EE65FECE04C1496E7E484EAF4B095", 00:13:54.242 "uuid": "600ee65f-ece0-4c14-96e7-e484eaf4b095", 00:13:54.242 "no_auto_visible": false 00:13:54.242 } 00:13:54.242 } 00:13:54.242 }, 00:13:54.242 { 00:13:54.242 "method": "nvmf_subsystem_add_listener", 00:13:54.242 "params": { 00:13:54.242 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:54.242 "listen_address": { 00:13:54.242 "trtype": "TCP", 00:13:54.242 "adrfam": "IPv4", 00:13:54.242 "traddr": "10.0.0.3", 00:13:54.242 "trsvcid": "4420" 00:13:54.242 }, 00:13:54.242 "secure_channel": false, 00:13:54.242 "sock_impl": "ssl" 00:13:54.242 } 00:13:54.242 } 
00:13:54.242 ] 00:13:54.242 } 00:13:54.242 ] 00:13:54.242 }' 00:13:54.242 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:54.242 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72352 00:13:54.242 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:13:54.242 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72352 00:13:54.242 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72352 ']' 00:13:54.242 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.242 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:54.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.242 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.242 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:54.242 13:52:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:54.242 [2024-12-06 13:52:53.540085] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:13:54.242 [2024-12-06 13:52:53.540397] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:54.502 [2024-12-06 13:52:53.678324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.502 [2024-12-06 13:52:53.731249] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:54.502 [2024-12-06 13:52:53.731316] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:54.502 [2024-12-06 13:52:53.731343] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:54.502 [2024-12-06 13:52:53.731350] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:54.502 [2024-12-06 13:52:53.731356] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
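The target in this pass is brought up non-interactively: the JSON configuration captured earlier with `save_config` is echoed back into `nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62`, so the keyring, the PSK-protected subsystem and the ssl listener all exist before the app finishes starting. A minimal sketch of the same round-trip using an ordinary file instead of a file descriptor; the `tgt_config.json` name and the jq filter are illustrative, not part of the test:

```bash
# Capture the live target configuration, as target/tls.sh does with save_config
# (assumes the target's RPC socket is the default /var/tmp/spdk.sock).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > tgt_config.json

# Inspect just the TLS-relevant pieces: the registered PSK and the ssl listener.
jq '.subsystems[]
    | select(.subsystem == "keyring" or .subsystem == "nvmf")
    | .config[]
    | select(.method == "keyring_file_add_key" or .method == "nvmf_subsystem_add_listener")' \
   tgt_config.json

# Start a target pre-loaded with the same keyring, subsystem and listener
# (the test additionally wraps this in "ip netns exec nvmf_tgt_ns_spdk" and
# pipes the config through /dev/fd/62 instead of a file).
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c tgt_config.json
```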
00:13:54.502 [2024-12-06 13:52:53.731867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.761 [2024-12-06 13:52:53.915124] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:54.761 [2024-12-06 13:52:54.008346] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:54.761 [2024-12-06 13:52:54.040306] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:54.761 [2024-12-06 13:52:54.040596] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:55.328 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:55.328 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:55.328 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:55.328 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:55.328 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:55.328 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:55.328 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=72380 00:13:55.328 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 72380 /var/tmp/bdevperf.sock 00:13:55.328 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72380 ']' 00:13:55.328 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:55.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:55.328 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:55.328 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:13:55.328 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:55.328 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:55.328 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:13:55.328 13:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:13:55.328 "subsystems": [ 00:13:55.328 { 00:13:55.328 "subsystem": "keyring", 00:13:55.328 "config": [ 00:13:55.328 { 00:13:55.328 "method": "keyring_file_add_key", 00:13:55.328 "params": { 00:13:55.328 "name": "key0", 00:13:55.328 "path": "/tmp/tmp.4sBmVr7jSK" 00:13:55.328 } 00:13:55.328 } 00:13:55.328 ] 00:13:55.328 }, 00:13:55.328 { 00:13:55.328 "subsystem": "iobuf", 00:13:55.328 "config": [ 00:13:55.328 { 00:13:55.328 "method": "iobuf_set_options", 00:13:55.328 "params": { 00:13:55.328 "small_pool_count": 8192, 00:13:55.328 "large_pool_count": 1024, 00:13:55.328 "small_bufsize": 8192, 00:13:55.328 "large_bufsize": 135168, 00:13:55.328 "enable_numa": false 00:13:55.328 } 00:13:55.328 } 00:13:55.328 ] 00:13:55.328 }, 00:13:55.328 { 00:13:55.328 "subsystem": "sock", 00:13:55.328 "config": [ 00:13:55.328 { 00:13:55.328 "method": "sock_set_default_impl", 00:13:55.328 "params": { 00:13:55.328 "impl_name": "uring" 00:13:55.328 } 00:13:55.328 }, 00:13:55.328 { 00:13:55.328 "method": "sock_impl_set_options", 00:13:55.328 "params": { 00:13:55.328 "impl_name": "ssl", 00:13:55.328 "recv_buf_size": 4096, 00:13:55.328 "send_buf_size": 4096, 00:13:55.328 "enable_recv_pipe": true, 00:13:55.328 "enable_quickack": false, 00:13:55.328 "enable_placement_id": 0, 00:13:55.328 "enable_zerocopy_send_server": true, 00:13:55.328 "enable_zerocopy_send_client": false, 00:13:55.328 "zerocopy_threshold": 0, 00:13:55.328 "tls_version": 0, 00:13:55.328 "enable_ktls": false 00:13:55.328 } 00:13:55.328 }, 00:13:55.328 { 00:13:55.328 "method": "sock_impl_set_options", 00:13:55.328 "params": { 00:13:55.328 "impl_name": "posix", 00:13:55.328 "recv_buf_size": 2097152, 00:13:55.328 "send_buf_size": 2097152, 00:13:55.328 "enable_recv_pipe": true, 00:13:55.328 "enable_quickack": false, 00:13:55.328 "enable_placement_id": 0, 00:13:55.328 "enable_zerocopy_send_server": true, 00:13:55.328 "enable_zerocopy_send_client": false, 00:13:55.328 "zerocopy_threshold": 0, 00:13:55.328 "tls_version": 0, 00:13:55.328 "enable_ktls": false 00:13:55.328 } 00:13:55.328 }, 00:13:55.328 { 00:13:55.328 "method": "sock_impl_set_options", 00:13:55.328 "params": { 00:13:55.328 "impl_name": "uring", 00:13:55.328 "recv_buf_size": 2097152, 00:13:55.328 "send_buf_size": 2097152, 00:13:55.328 "enable_recv_pipe": true, 00:13:55.328 "enable_quickack": false, 00:13:55.328 "enable_placement_id": 0, 00:13:55.328 "enable_zerocopy_send_server": false, 00:13:55.328 "enable_zerocopy_send_client": false, 00:13:55.328 "zerocopy_threshold": 0, 00:13:55.328 "tls_version": 0, 00:13:55.328 "enable_ktls": false 00:13:55.328 } 00:13:55.328 } 00:13:55.328 ] 00:13:55.328 }, 00:13:55.328 { 00:13:55.328 "subsystem": "vmd", 00:13:55.328 "config": [] 00:13:55.328 }, 00:13:55.328 { 00:13:55.328 "subsystem": "accel", 00:13:55.328 "config": [ 00:13:55.328 { 00:13:55.328 "method": "accel_set_options", 00:13:55.328 "params": { 00:13:55.328 "small_cache_size": 128, 00:13:55.328 "large_cache_size": 16, 00:13:55.328 "task_count": 2048, 00:13:55.328 "sequence_count": 2048, 
00:13:55.328 "buf_count": 2048 00:13:55.328 } 00:13:55.328 } 00:13:55.328 ] 00:13:55.328 }, 00:13:55.328 { 00:13:55.328 "subsystem": "bdev", 00:13:55.328 "config": [ 00:13:55.328 { 00:13:55.328 "method": "bdev_set_options", 00:13:55.328 "params": { 00:13:55.328 "bdev_io_pool_size": 65535, 00:13:55.328 "bdev_io_cache_size": 256, 00:13:55.328 "bdev_auto_examine": true, 00:13:55.328 "iobuf_small_cache_size": 128, 00:13:55.328 "iobuf_large_cache_size": 16 00:13:55.328 } 00:13:55.328 }, 00:13:55.328 { 00:13:55.328 "method": "bdev_raid_set_options", 00:13:55.328 "params": { 00:13:55.328 "process_window_size_kb": 1024, 00:13:55.328 "process_max_bandwidth_mb_sec": 0 00:13:55.328 } 00:13:55.328 }, 00:13:55.328 { 00:13:55.328 "method": "bdev_iscsi_set_options", 00:13:55.328 "params": { 00:13:55.328 "timeout_sec": 30 00:13:55.328 } 00:13:55.328 }, 00:13:55.328 { 00:13:55.328 "method": "bdev_nvme_set_options", 00:13:55.328 "params": { 00:13:55.328 "action_on_timeout": "none", 00:13:55.328 "timeout_us": 0, 00:13:55.328 "timeout_admin_us": 0, 00:13:55.328 "keep_alive_timeout_ms": 10000, 00:13:55.328 "arbitration_burst": 0, 00:13:55.328 "low_priority_weight": 0, 00:13:55.328 "medium_priority_weight": 0, 00:13:55.328 "high_priority_weight": 0, 00:13:55.328 "nvme_adminq_poll_period_us": 10000, 00:13:55.328 "nvme_ioq_poll_period_us": 0, 00:13:55.328 "io_queue_requests": 512, 00:13:55.328 "delay_cmd_submit": true, 00:13:55.328 "transport_retry_count": 4, 00:13:55.328 "bdev_retry_count": 3, 00:13:55.329 "transport_ack_timeout": 0, 00:13:55.329 "ctrlr_loss_timeout_sec": 0, 00:13:55.329 "reconnect_delay_sec": 0, 00:13:55.329 "fast_io_fail_timeout_sec": 0, 00:13:55.329 "disable_auto_failback": false, 00:13:55.329 "generate_uuids": false, 00:13:55.329 "transport_tos": 0, 00:13:55.329 "nvme_error_stat": false, 00:13:55.329 "rdma_srq_size": 0, 00:13:55.329 "io_path_stat": false, 00:13:55.329 "allow_accel_sequence": false, 00:13:55.329 "rdma_max_cq_size": 0, 00:13:55.329 "rdma_cm_event_timeout_ms": 0, 00:13:55.329 "dhchap_digests": [ 00:13:55.329 "sha256", 00:13:55.329 "sha384", 00:13:55.329 "sha512" 00:13:55.329 ], 00:13:55.329 "dhchap_dhgroups": [ 00:13:55.329 "null", 00:13:55.329 "ffdhe2048", 00:13:55.329 "ffdhe3072", 00:13:55.329 "ffdhe4096", 00:13:55.329 "ffdhe6144", 00:13:55.329 "ffdhe8192" 00:13:55.329 ] 00:13:55.329 } 00:13:55.329 }, 00:13:55.329 { 00:13:55.329 "method": "bdev_nvme_attach_controller", 00:13:55.329 "params": { 00:13:55.329 "name": "nvme0", 00:13:55.329 "trtype": "TCP", 00:13:55.329 "adrfam": "IPv4", 00:13:55.329 "traddr": "10.0.0.3", 00:13:55.329 "trsvcid": "4420", 00:13:55.329 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:55.329 "prchk_reftag": false, 00:13:55.329 "prchk_guard": false, 00:13:55.329 "ctrlr_loss_timeout_sec": 0, 00:13:55.329 "reconnect_delay_sec": 0, 00:13:55.329 "fast_io_fail_timeout_sec": 0, 00:13:55.329 "psk": "key0", 00:13:55.329 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:55.329 "hdgst": false, 00:13:55.329 "ddgst": false, 00:13:55.329 "multipath": "multipath" 00:13:55.329 } 00:13:55.329 }, 00:13:55.329 { 00:13:55.329 "method": "bdev_nvme_set_hotplug", 00:13:55.329 "params": { 00:13:55.329 "period_us": 100000, 00:13:55.329 "enable": false 00:13:55.329 } 00:13:55.329 }, 00:13:55.329 { 00:13:55.329 "method": "bdev_enable_histogram", 00:13:55.329 "params": { 00:13:55.329 "name": "nvme0n1", 00:13:55.329 "enable": true 00:13:55.329 } 00:13:55.329 }, 00:13:55.329 { 00:13:55.329 "method": "bdev_wait_for_examine" 00:13:55.329 } 00:13:55.329 ] 00:13:55.329 }, 00:13:55.329 { 
00:13:55.329 "subsystem": "nbd", 00:13:55.329 "config": [] 00:13:55.329 } 00:13:55.329 ] 00:13:55.329 }' 00:13:55.329 [2024-12-06 13:52:54.588489] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:13:55.329 [2024-12-06 13:52:54.588583] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72380 ] 00:13:55.329 [2024-12-06 13:52:54.728362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.587 [2024-12-06 13:52:54.772192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:55.587 [2024-12-06 13:52:54.909792] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:55.587 [2024-12-06 13:52:54.965233] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:56.521 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:56.521 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:56.521 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:13:56.521 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:13:56.521 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:56.521 13:52:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:56.778 Running I/O for 1 seconds... 
00:13:57.712 4599.00 IOPS, 17.96 MiB/s 00:13:57.712 Latency(us) 00:13:57.712 [2024-12-06T13:52:57.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.712 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:57.712 Verification LBA range: start 0x0 length 0x2000 00:13:57.712 nvme0n1 : 1.01 4660.63 18.21 0.00 0.00 27240.55 5153.51 22639.71 00:13:57.712 [2024-12-06T13:52:57.116Z] =================================================================================================================== 00:13:57.712 [2024-12-06T13:52:57.116Z] Total : 4660.63 18.21 0.00 0.00 27240.55 5153.51 22639.71 00:13:57.712 { 00:13:57.712 "results": [ 00:13:57.712 { 00:13:57.712 "job": "nvme0n1", 00:13:57.712 "core_mask": "0x2", 00:13:57.712 "workload": "verify", 00:13:57.713 "status": "finished", 00:13:57.713 "verify_range": { 00:13:57.713 "start": 0, 00:13:57.713 "length": 8192 00:13:57.713 }, 00:13:57.713 "queue_depth": 128, 00:13:57.713 "io_size": 4096, 00:13:57.713 "runtime": 1.01424, 00:13:57.713 "iops": 4660.632591891465, 00:13:57.713 "mibps": 18.205596062076037, 00:13:57.713 "io_failed": 0, 00:13:57.713 "io_timeout": 0, 00:13:57.713 "avg_latency_us": 27240.55131180645, 00:13:57.713 "min_latency_us": 5153.512727272728, 00:13:57.713 "max_latency_us": 22639.70909090909 00:13:57.713 } 00:13:57.713 ], 00:13:57.713 "core_count": 1 00:13:57.713 } 00:13:57.713 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:13:57.713 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:13:57.713 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:13:57.713 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:13:57.713 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:13:57.713 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:13:57.713 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:57.713 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:13:57.713 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:13:57.713 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:13:57.713 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:57.713 nvmf_trace.0 00:13:57.970 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:13:57.971 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 72380 00:13:57.971 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72380 ']' 00:13:57.971 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72380 00:13:57.971 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:57.971 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:57.971 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72380 00:13:57.971 13:52:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:57.971 killing process with pid 72380 00:13:57.971 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:57.971 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72380' 00:13:57.971 Received shutdown signal, test time was about 1.000000 seconds 00:13:57.971 00:13:57.971 Latency(us) 00:13:57.971 [2024-12-06T13:52:57.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.971 [2024-12-06T13:52:57.375Z] =================================================================================================================== 00:13:57.971 [2024-12-06T13:52:57.375Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:57.971 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72380 00:13:57.971 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72380 00:13:58.230 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:13:58.230 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:58.230 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:13:58.230 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:58.230 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:13:58.230 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:58.230 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:58.230 rmmod nvme_tcp 00:13:58.230 rmmod nvme_fabrics 00:13:58.230 rmmod nvme_keyring 00:13:58.230 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:58.230 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:13:58.230 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:13:58.230 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 72352 ']' 00:13:58.230 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 72352 00:13:58.230 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72352 ']' 00:13:58.230 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72352 00:13:58.230 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:58.230 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:58.230 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72352 00:13:58.230 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:58.230 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:58.230 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72352' 00:13:58.230 killing process with pid 72352 00:13:58.230 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72352 00:13:58.230 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # 
wait 72352 00:13:58.488 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:58.488 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:58.488 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:58.488 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:13:58.488 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:13:58.488 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:13:58.488 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:58.488 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:58.488 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:58.488 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:58.488 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:58.488 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:58.488 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:58.488 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:58.488 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:58.488 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:58.488 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:58.746 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:58.746 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:58.746 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:58.746 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:58.746 13:52:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:58.746 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:58.746 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.747 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:58.747 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:58.747 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:13:58.747 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.xy9RQ1u4IV /tmp/tmp.2BEpBxTQwp /tmp/tmp.4sBmVr7jSK 00:13:58.747 00:13:58.747 real 1m25.616s 00:13:58.747 user 2m17.372s 00:13:58.747 sys 0m27.947s 00:13:58.747 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:58.747 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 
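The verify run above completes at roughly 4,600 IOPS with an average latency of about 27 ms over the TLS-encrypted connection, and the same figures are emitted as a JSON block alongside the human-readable table. A small sketch of pulling those numbers out with jq, assuming the JSON block has been saved to `results.json` (the filename is illustrative):

```bash
# Summarize each bdevperf job from the results JSON printed earlier.
jq -r '.results[]
    | "\(.job): \(.iops | floor) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us / 1000 | floor) ms"' \
   results.json
```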
00:13:58.747 ************************************ 00:13:58.747 END TEST nvmf_tls 00:13:58.747 ************************************ 00:13:58.747 13:52:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:13:58.747 13:52:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:58.747 13:52:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:58.747 13:52:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:58.747 ************************************ 00:13:58.747 START TEST nvmf_fips 00:13:58.747 ************************************ 00:13:58.747 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:13:59.005 * Looking for test storage... 00:13:59.005 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:13:59.005 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:59.005 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:13:59.005 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:59.005 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:59.005 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:59.005 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:59.005 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:59.005 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:13:59.005 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:13:59.005 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:13:59.005 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:13:59.005 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:13:59.005 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:13:59.005 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:13:59.005 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:59.005 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:13:59.005 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:13:59.005 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:59.005 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:59.005 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:13:59.005 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:13:59.005 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:59.005 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:13:59.005 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:13:59.005 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:13:59.005 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:13:59.005 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:59.005 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:13:59.005 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:13:59.005 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:59.005 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:59.005 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:13:59.005 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:59.005 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:59.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.005 --rc genhtml_branch_coverage=1 00:13:59.005 --rc genhtml_function_coverage=1 00:13:59.005 --rc genhtml_legend=1 00:13:59.006 --rc geninfo_all_blocks=1 00:13:59.006 --rc geninfo_unexecuted_blocks=1 00:13:59.006 00:13:59.006 ' 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:59.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.006 --rc genhtml_branch_coverage=1 00:13:59.006 --rc genhtml_function_coverage=1 00:13:59.006 --rc genhtml_legend=1 00:13:59.006 --rc geninfo_all_blocks=1 00:13:59.006 --rc geninfo_unexecuted_blocks=1 00:13:59.006 00:13:59.006 ' 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:59.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.006 --rc genhtml_branch_coverage=1 00:13:59.006 --rc genhtml_function_coverage=1 00:13:59.006 --rc genhtml_legend=1 00:13:59.006 --rc geninfo_all_blocks=1 00:13:59.006 --rc geninfo_unexecuted_blocks=1 00:13:59.006 00:13:59.006 ' 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:59.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.006 --rc genhtml_branch_coverage=1 00:13:59.006 --rc genhtml_function_coverage=1 00:13:59.006 --rc genhtml_legend=1 00:13:59.006 --rc geninfo_all_blocks=1 00:13:59.006 --rc geninfo_unexecuted_blocks=1 00:13:59.006 00:13:59.006 ' 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=cfa2def7-c8af-457f-82a0-b312efdea7f4 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:59.006 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:13:59.006 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:59.007 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:13:59.007 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:13:59.007 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:59.007 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:13:59.007 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:13:59.007 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:13:59.007 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:13:59.007 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:13:59.007 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:13:59.007 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:13:59.007 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:13:59.007 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:13:59.007 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:13:59.007 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:13:59.007 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:13:59.007 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:13:59.007 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:13:59.007 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:13:59.007 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:13:59.007 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:13:59.318 Error setting digest 00:13:59.318 4092C9CDA57F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:13:59.318 4092C9CDA57F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:59.318 
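At this point the test has confirmed the FIPS preconditions seen in the trace: OpenSSL is >= 3.0.0, /usr/lib64/ossl-modules/fips.so exists, the provider list contains both a base and a fips provider, and a non-approved digest (md5) is rejected with "Error setting digest". A rough standalone re-creation of those checks, assuming OpenSSL 3.x with OPENSSL_CONF pointing at a FIPS-enabled config as above; this is a sketch, not the fips.sh helper itself:
openssl version                        # expect 3.x, i.e. >= 3.0.0
openssl list -providers | grep name    # expect a base and a fips provider
if echo -n hello | openssl md5 2>/dev/null; then
    echo "md5 still works: FIPS restrictions are NOT active" >&2
    exit 1
else
    echo "md5 rejected: FIPS restrictions are active"
fi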
13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:59.318 Cannot find device "nvmf_init_br" 00:13:59.318 13:52:58 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:59.318 Cannot find device "nvmf_init_br2" 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:59.318 Cannot find device "nvmf_tgt_br" 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:59.318 Cannot find device "nvmf_tgt_br2" 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:59.318 Cannot find device "nvmf_init_br" 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:59.318 Cannot find device "nvmf_init_br2" 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:59.318 Cannot find device "nvmf_tgt_br" 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:59.318 Cannot find device "nvmf_tgt_br2" 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:59.318 Cannot find device "nvmf_br" 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:59.318 Cannot find device "nvmf_init_if" 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:59.318 Cannot find device "nvmf_init_if2" 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:59.318 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:59.318 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:59.318 13:52:58 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:59.318 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:59.577 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:59.577 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:59.577 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:59.577 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:59.577 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:59.577 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:59.577 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:59.577 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:59.577 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:59.577 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:59.577 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:59.577 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:59.577 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:59.577 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:59.577 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:59.577 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:59.577 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:59.577 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:59.577 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:13:59.577 00:13:59.577 --- 10.0.0.3 ping statistics --- 00:13:59.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.577 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:13:59.577 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:59.577 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:59.577 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:13:59.577 00:13:59.577 --- 10.0.0.4 ping statistics --- 00:13:59.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.577 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:13:59.577 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:59.577 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:59.577 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:13:59.577 00:13:59.577 --- 10.0.0.1 ping statistics --- 00:13:59.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.577 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:13:59.577 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:59.577 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:59.577 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:13:59.577 00:13:59.577 --- 10.0.0.2 ping statistics --- 00:13:59.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.577 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:13:59.578 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:59.578 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:13:59.578 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:59.578 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:59.578 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:59.578 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:59.578 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:59.578 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:59.578 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:59.578 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:13:59.578 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:59.578 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:59.578 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:59.578 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=72706 00:13:59.578 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:59.578 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 72706 00:13:59.578 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 72706 ']' 00:13:59.578 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.578 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:59.578 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.578 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:59.578 13:52:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:59.836 [2024-12-06 13:52:58.979587] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
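The nvmf_veth_init sequence above builds the test topology: a nvmf_tgt_ns_spdk namespace, veth pairs whose bridge-side peers are enslaved to nvmf_br, addresses 10.0.0.1/10.0.0.2 on the initiator side and 10.0.0.3/10.0.0.4 inside the namespace, iptables ACCEPT rules for port 4420, and connectivity pings in both directions. A condensed sketch for one pair only, with names and addresses copied from the trace; the real helper also configures the second pair and the FORWARD rule:
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3        # initiator side -> target namespace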
00:13:59.836 [2024-12-06 13:52:58.979683] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:59.836 [2024-12-06 13:52:59.135169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.836 [2024-12-06 13:52:59.205527] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:59.836 [2024-12-06 13:52:59.205578] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:59.836 [2024-12-06 13:52:59.205592] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:59.836 [2024-12-06 13:52:59.205603] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:59.836 [2024-12-06 13:52:59.205612] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:59.836 [2024-12-06 13:52:59.206158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:00.094 [2024-12-06 13:52:59.264881] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:00.661 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:00.661 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:14:00.661 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:00.661 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:00.661 13:52:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:00.661 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:00.661 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:14:00.661 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:00.661 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:14:00.661 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.HyW 00:14:00.661 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:00.661 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.HyW 00:14:00.661 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.HyW 00:14:00.661 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.HyW 00:14:00.661 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:01.227 [2024-12-06 13:53:00.332522] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:01.227 [2024-12-06 13:53:00.348437] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:01.227 [2024-12-06 13:53:00.348663] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:01.227 malloc0 00:14:01.227 13:53:00 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:01.227 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=72748 00:14:01.228 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:01.228 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 72748 /var/tmp/bdevperf.sock 00:14:01.228 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 72748 ']' 00:14:01.228 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:01.228 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:01.228 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:01.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:01.228 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:01.228 13:53:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:01.228 [2024-12-06 13:53:00.503865] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:14:01.228 [2024-12-06 13:53:00.503956] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72748 ] 00:14:01.500 [2024-12-06 13:53:00.654891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.500 [2024-12-06 13:53:00.715493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:01.500 [2024-12-06 13:53:00.769293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:02.082 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:02.082 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:14:02.082 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.HyW 00:14:02.341 13:53:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:02.599 [2024-12-06 13:53:01.931437] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:02.858 TLSTESTn1 00:14:02.858 13:53:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:02.858 Running I/O for 10 seconds... 
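The attach above is the TLS data path under test: the PSK written to /tmp/spdk-psk.HyW is registered with the bdevperf instance as key0 and the controller is attached over TCP with --psk, after which bdevperf drives verify I/O for 10 seconds. The initiator-side wiring, reduced to the RPCs visible in the trace (assumes the target was already configured for TLS on 10.0.0.3:4420 by setup_nvmf_tgt_conf earlier in this log):
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key_path=/tmp/spdk-psk.HyW          # mode 0600, contains the NVMeTLSkey-1:01:... PSK
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests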
00:14:04.730 4162.00 IOPS, 16.26 MiB/s [2024-12-06T13:53:05.509Z] 4419.50 IOPS, 17.26 MiB/s [2024-12-06T13:53:06.442Z] 4503.00 IOPS, 17.59 MiB/s [2024-12-06T13:53:07.377Z] 4560.75 IOPS, 17.82 MiB/s [2024-12-06T13:53:08.315Z] 4612.60 IOPS, 18.02 MiB/s [2024-12-06T13:53:09.252Z] 4622.33 IOPS, 18.06 MiB/s [2024-12-06T13:53:10.188Z] 4633.71 IOPS, 18.10 MiB/s [2024-12-06T13:53:11.123Z] 4639.38 IOPS, 18.12 MiB/s [2024-12-06T13:53:12.500Z] 4628.22 IOPS, 18.08 MiB/s [2024-12-06T13:53:12.500Z] 4610.20 IOPS, 18.01 MiB/s 00:14:13.096 Latency(us) 00:14:13.096 [2024-12-06T13:53:12.500Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.096 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:13.096 Verification LBA range: start 0x0 length 0x2000 00:14:13.096 TLSTESTn1 : 10.02 4615.35 18.03 0.00 0.00 27684.40 5540.77 38606.66 00:14:13.096 [2024-12-06T13:53:12.500Z] =================================================================================================================== 00:14:13.096 [2024-12-06T13:53:12.500Z] Total : 4615.35 18.03 0.00 0.00 27684.40 5540.77 38606.66 00:14:13.096 { 00:14:13.096 "results": [ 00:14:13.096 { 00:14:13.096 "job": "TLSTESTn1", 00:14:13.096 "core_mask": "0x4", 00:14:13.096 "workload": "verify", 00:14:13.096 "status": "finished", 00:14:13.096 "verify_range": { 00:14:13.096 "start": 0, 00:14:13.096 "length": 8192 00:14:13.096 }, 00:14:13.096 "queue_depth": 128, 00:14:13.096 "io_size": 4096, 00:14:13.096 "runtime": 10.01549, 00:14:13.096 "iops": 4615.350821577376, 00:14:13.096 "mibps": 18.028714146786626, 00:14:13.096 "io_failed": 0, 00:14:13.096 "io_timeout": 0, 00:14:13.096 "avg_latency_us": 27684.396098608584, 00:14:13.096 "min_latency_us": 5540.770909090909, 00:14:13.096 "max_latency_us": 38606.66181818182 00:14:13.096 } 00:14:13.096 ], 00:14:13.096 "core_count": 1 00:14:13.096 } 00:14:13.096 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:14:13.096 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:14:13.096 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:14:13.096 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:14:13.096 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:14:13.096 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:13.096 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:14:13.096 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:14:13.096 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:14:13.096 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:13.096 nvmf_trace.0 00:14:13.096 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:14:13.096 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 72748 00:14:13.096 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 72748 ']' 00:14:13.096 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
72748 00:14:13.096 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:14:13.096 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:13.096 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72748 00:14:13.096 killing process with pid 72748 00:14:13.096 Received shutdown signal, test time was about 10.000000 seconds 00:14:13.096 00:14:13.096 Latency(us) 00:14:13.096 [2024-12-06T13:53:12.500Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.096 [2024-12-06T13:53:12.500Z] =================================================================================================================== 00:14:13.096 [2024-12-06T13:53:12.500Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:13.096 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:13.096 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:13.096 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72748' 00:14:13.096 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 72748 00:14:13.096 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 72748 00:14:13.096 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:14:13.096 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:13.096 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:14:13.355 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:13.355 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:14:13.355 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:13.355 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:13.355 rmmod nvme_tcp 00:14:13.355 rmmod nvme_fabrics 00:14:13.355 rmmod nvme_keyring 00:14:13.355 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:13.355 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:14:13.355 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:14:13.355 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 72706 ']' 00:14:13.355 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 72706 00:14:13.355 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 72706 ']' 00:14:13.355 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 72706 00:14:13.355 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:14:13.355 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:13.355 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72706 00:14:13.355 killing process with pid 72706 00:14:13.355 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:13.355 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:13.355 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72706' 00:14:13.355 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 72706 00:14:13.355 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 72706 00:14:13.615 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:13.615 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:13.615 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:13.615 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:14:13.615 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:14:13.615 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:14:13.615 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:13.615 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:13.615 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:13.615 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:13.615 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:13.615 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:13.615 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:13.615 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:13.615 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:13.615 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:13.615 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:13.615 13:53:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:13.874 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:13.874 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:13.874 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:13.874 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:13.874 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:13.874 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.874 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:13.874 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.874 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:14:13.874 13:53:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.HyW 00:14:13.874 00:14:13.874 real 0m15.048s 00:14:13.874 user 0m20.950s 00:14:13.874 sys 0m5.688s 00:14:13.874 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:13.874 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:13.874 ************************************ 00:14:13.874 END TEST nvmf_fips 00:14:13.874 ************************************ 00:14:13.874 13:53:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:14:13.874 13:53:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:13.874 13:53:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:13.874 13:53:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:13.874 ************************************ 00:14:13.874 START TEST nvmf_control_msg_list 00:14:13.874 ************************************ 00:14:13.874 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:14:13.874 * Looking for test storage... 00:14:14.134 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:14.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.134 --rc genhtml_branch_coverage=1 00:14:14.134 --rc genhtml_function_coverage=1 00:14:14.134 --rc genhtml_legend=1 00:14:14.134 --rc geninfo_all_blocks=1 00:14:14.134 --rc geninfo_unexecuted_blocks=1 00:14:14.134 00:14:14.134 ' 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:14.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.134 --rc genhtml_branch_coverage=1 00:14:14.134 --rc genhtml_function_coverage=1 00:14:14.134 --rc genhtml_legend=1 00:14:14.134 --rc geninfo_all_blocks=1 00:14:14.134 --rc geninfo_unexecuted_blocks=1 00:14:14.134 00:14:14.134 ' 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:14.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.134 --rc genhtml_branch_coverage=1 00:14:14.134 --rc genhtml_function_coverage=1 00:14:14.134 --rc genhtml_legend=1 00:14:14.134 --rc geninfo_all_blocks=1 00:14:14.134 --rc geninfo_unexecuted_blocks=1 00:14:14.134 00:14:14.134 ' 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:14.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.134 --rc genhtml_branch_coverage=1 00:14:14.134 --rc genhtml_function_coverage=1 00:14:14.134 --rc genhtml_legend=1 00:14:14.134 --rc geninfo_all_blocks=1 00:14:14.134 --rc 
geninfo_unexecuted_blocks=1 00:14:14.134 00:14:14.134 ' 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=cfa2def7-c8af-457f-82a0-b312efdea7f4 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:14.134 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:14.135 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:14.135 Cannot find device "nvmf_init_br" 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:14.135 Cannot find device "nvmf_init_br2" 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:14.135 Cannot find device "nvmf_tgt_br" 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:14.135 Cannot find device "nvmf_tgt_br2" 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:14.135 Cannot find device "nvmf_init_br" 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:14.135 Cannot find device "nvmf_init_br2" 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:14.135 Cannot find device "nvmf_tgt_br" 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:14.135 Cannot find device "nvmf_tgt_br2" 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:14.135 Cannot find device "nvmf_br" 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:14.135 Cannot find 
device "nvmf_init_if" 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:14:14.135 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:14.395 Cannot find device "nvmf_init_if2" 00:14:14.395 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:14:14.395 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:14.395 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:14.395 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:14:14.395 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:14.395 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:14.395 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:14:14.395 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:14.395 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:14.395 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:14.395 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:14.395 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:14.395 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:14.395 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:14.395 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:14.395 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:14.395 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:14.395 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:14.395 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:14.395 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:14.395 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:14.395 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:14.395 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:14.395 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:14.395 13:53:13 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:14.395 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:14.395 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:14.395 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:14.395 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:14.395 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:14.395 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:14.395 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:14.395 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:14.654 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:14.654 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:14.654 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:14.654 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:14.654 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:14.654 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:14.654 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:14.654 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:14.654 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:14:14.654 00:14:14.654 --- 10.0.0.3 ping statistics --- 00:14:14.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.654 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:14:14.654 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:14.654 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:14.654 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:14:14.654 00:14:14.654 --- 10.0.0.4 ping statistics --- 00:14:14.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.654 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:14:14.654 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:14.654 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:14.654 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:14:14.654 00:14:14.654 --- 10.0.0.1 ping statistics --- 00:14:14.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.654 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:14:14.654 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:14.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:14.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:14:14.654 00:14:14.654 --- 10.0.0.2 ping statistics --- 00:14:14.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.654 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:14:14.654 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:14.654 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:14:14.654 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:14.655 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:14.655 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:14.655 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:14.655 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:14.655 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:14.655 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:14.655 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:14:14.655 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:14.655 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:14.655 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:14.655 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=73131 00:14:14.655 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 73131 00:14:14.655 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:14.655 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 73131 ']' 00:14:14.655 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.655 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:14.655 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:14.655 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:14.655 13:53:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:14.655 [2024-12-06 13:53:13.912861] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:14:14.655 [2024-12-06 13:53:13.912948] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.914 [2024-12-06 13:53:14.065576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.914 [2024-12-06 13:53:14.118273] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:14.914 [2024-12-06 13:53:14.118335] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:14.914 [2024-12-06 13:53:14.118349] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:14.914 [2024-12-06 13:53:14.118359] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:14.914 [2024-12-06 13:53:14.118367] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:14.914 [2024-12-06 13:53:14.118834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.914 [2024-12-06 13:53:14.175856] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:14.914 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:14.914 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:14:14.914 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:14.914 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:14.914 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:14.914 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:14.914 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:14:14.914 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:14:14.914 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:14:14.914 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.914 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:14.914 [2024-12-06 13:53:14.287052] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:14.914 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.914 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:14:14.914 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.914 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:14.914 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.914 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:14:14.914 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.914 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:14.914 Malloc0 00:14:14.914 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.914 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:14:14.914 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.914 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:15.174 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.174 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:14:15.174 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.174 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:15.174 [2024-12-06 13:53:14.326918] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:15.174 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.174 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=73161 00:14:15.174 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:15.174 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=73162 00:14:15.174 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:15.174 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=73163 00:14:15.174 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:15.174 13:53:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 73161 00:14:15.174 [2024-12-06 13:53:14.505156] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:15.174 [2024-12-06 13:53:14.515641] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:15.174 [2024-12-06 13:53:14.515814] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:16.571 Initializing NVMe Controllers 00:14:16.571 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:16.571 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:14:16.571 Initialization complete. Launching workers. 00:14:16.571 ======================================================== 00:14:16.571 Latency(us) 00:14:16.571 Device Information : IOPS MiB/s Average min max 00:14:16.571 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3831.00 14.96 260.71 123.60 607.15 00:14:16.571 ======================================================== 00:14:16.571 Total : 3831.00 14.96 260.71 123.60 607.15 00:14:16.571 00:14:16.571 Initializing NVMe Controllers 00:14:16.571 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:16.571 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:14:16.571 Initialization complete. Launching workers. 00:14:16.571 ======================================================== 00:14:16.571 Latency(us) 00:14:16.571 Device Information : IOPS MiB/s Average min max 00:14:16.571 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3834.00 14.98 260.44 154.21 488.84 00:14:16.571 ======================================================== 00:14:16.571 Total : 3834.00 14.98 260.44 154.21 488.84 00:14:16.571 00:14:16.571 Initializing NVMe Controllers 00:14:16.571 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:16.571 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:14:16.571 Initialization complete. Launching workers. 
00:14:16.571 ======================================================== 00:14:16.571 Latency(us) 00:14:16.571 Device Information : IOPS MiB/s Average min max 00:14:16.571 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3824.00 14.94 261.10 157.25 708.17 00:14:16.571 ======================================================== 00:14:16.571 Total : 3824.00 14.94 261.10 157.25 708.17 00:14:16.571 00:14:16.571 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 73162 00:14:16.571 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 73163 00:14:16.571 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:16.571 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:14:16.571 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:16.571 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:14:16.571 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:16.571 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:14:16.571 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:16.571 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:16.571 rmmod nvme_tcp 00:14:16.571 rmmod nvme_fabrics 00:14:16.571 rmmod nvme_keyring 00:14:16.571 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:16.571 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:14:16.571 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:14:16.571 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 73131 ']' 00:14:16.571 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 73131 00:14:16.571 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 73131 ']' 00:14:16.571 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 73131 00:14:16.571 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:14:16.571 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:16.571 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73131 00:14:16.571 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:16.571 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:16.571 killing process with pid 73131 00:14:16.571 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73131' 00:14:16.571 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 73131 00:14:16.571 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 73131 00:14:16.571 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:16.571 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:16.571 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:16.571 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:14:16.571 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:16.571 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:14:16.571 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:14:16.571 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:16.571 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:16.571 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:16.571 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:16.571 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:16.571 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:16.571 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:16.571 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:16.571 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:16.571 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:16.571 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:16.843 13:53:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:16.843 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:16.843 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:16.843 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:16.843 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:16.843 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:16.843 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:16.844 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:16.844 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:14:16.844 00:14:16.844 real 0m2.908s 00:14:16.844 user 0m4.706s 00:14:16.844 
sys 0m1.337s 00:14:16.844 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:16.844 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:16.844 ************************************ 00:14:16.844 END TEST nvmf_control_msg_list 00:14:16.844 ************************************ 00:14:16.844 13:53:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:14:16.844 13:53:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:16.844 13:53:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:16.844 13:53:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:16.844 ************************************ 00:14:16.844 START TEST nvmf_wait_for_buf 00:14:16.844 ************************************ 00:14:16.844 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:14:16.844 * Looking for test storage... 00:14:17.102 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:17.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.102 --rc genhtml_branch_coverage=1 00:14:17.102 --rc genhtml_function_coverage=1 00:14:17.102 --rc genhtml_legend=1 00:14:17.102 --rc geninfo_all_blocks=1 00:14:17.102 --rc geninfo_unexecuted_blocks=1 00:14:17.102 00:14:17.102 ' 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:17.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.102 --rc genhtml_branch_coverage=1 00:14:17.102 --rc genhtml_function_coverage=1 00:14:17.102 --rc genhtml_legend=1 00:14:17.102 --rc geninfo_all_blocks=1 00:14:17.102 --rc geninfo_unexecuted_blocks=1 00:14:17.102 00:14:17.102 ' 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:17.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.102 --rc genhtml_branch_coverage=1 00:14:17.102 --rc genhtml_function_coverage=1 00:14:17.102 --rc genhtml_legend=1 00:14:17.102 --rc geninfo_all_blocks=1 00:14:17.102 --rc geninfo_unexecuted_blocks=1 00:14:17.102 00:14:17.102 ' 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:17.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.102 --rc genhtml_branch_coverage=1 00:14:17.102 --rc genhtml_function_coverage=1 00:14:17.102 --rc genhtml_legend=1 00:14:17.102 --rc geninfo_all_blocks=1 00:14:17.102 --rc geninfo_unexecuted_blocks=1 00:14:17.102 00:14:17.102 ' 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:17.102 13:53:16 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=cfa2def7-c8af-457f-82a0-b312efdea7f4 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:17.102 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:17.102 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:17.103 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:17.103 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:17.103 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:17.103 Cannot find device "nvmf_init_br" 00:14:17.103 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:14:17.103 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:17.103 Cannot find device "nvmf_init_br2" 00:14:17.103 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:14:17.103 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:17.103 Cannot find device "nvmf_tgt_br" 00:14:17.103 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:14:17.103 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:17.103 Cannot find device "nvmf_tgt_br2" 00:14:17.103 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:14:17.103 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:17.103 Cannot find device "nvmf_init_br" 00:14:17.103 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:14:17.103 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:17.103 Cannot find device "nvmf_init_br2" 00:14:17.103 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:14:17.103 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:17.103 Cannot find device "nvmf_tgt_br" 00:14:17.103 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:14:17.103 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:17.103 Cannot find device "nvmf_tgt_br2" 00:14:17.103 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:14:17.103 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:17.361 Cannot find device "nvmf_br" 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:17.361 Cannot find device "nvmf_init_if" 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:17.361 Cannot find device "nvmf_init_if2" 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:17.361 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:17.361 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:17.361 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:17.361 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:14:17.361 00:14:17.361 --- 10.0.0.3 ping statistics --- 00:14:17.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.361 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:17.361 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:17.361 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:14:17.361 00:14:17.361 --- 10.0.0.4 ping statistics --- 00:14:17.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.361 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:14:17.361 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:17.619 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:17.619 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:14:17.619 00:14:17.620 --- 10.0.0.1 ping statistics --- 00:14:17.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.620 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:14:17.620 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:17.620 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:17.620 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:14:17.620 00:14:17.620 --- 10.0.0.2 ping statistics --- 00:14:17.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.620 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:14:17.620 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:17.620 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:14:17.620 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:17.620 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:17.620 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:17.620 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:17.620 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:17.620 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:17.620 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:17.620 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:14:17.620 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:17.620 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:17.620 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:17.620 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=73396 00:14:17.620 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:14:17.620 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 73396 00:14:17.620 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 73396 ']' 00:14:17.620 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.620 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:17.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.620 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.620 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:17.620 13:53:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:17.620 [2024-12-06 13:53:16.844190] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:14:17.620 [2024-12-06 13:53:16.844250] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:17.620 [2024-12-06 13:53:16.983838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.878 [2024-12-06 13:53:17.029790] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:17.878 [2024-12-06 13:53:17.029840] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:17.878 [2024-12-06 13:53:17.029849] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:17.878 [2024-12-06 13:53:17.029856] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:17.878 [2024-12-06 13:53:17.029862] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:17.878 [2024-12-06 13:53:17.030179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.878 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:17.878 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:14:17.878 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:17.878 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:17.878 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:17.878 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:17.878 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:14:17.878 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:14:17.878 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:14:17.878 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.878 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:17.878 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.879 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:14:17.879 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.879 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:17.879 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.879 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:14:17.879 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.879 13:53:17 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:17.879 [2024-12-06 13:53:17.214943] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:17.879 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.879 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:14:17.879 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.879 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:17.879 Malloc0 00:14:17.879 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.879 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:14:17.879 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.879 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:17.879 [2024-12-06 13:53:17.278485] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:18.137 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.137 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:14:18.137 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.137 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:18.137 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.137 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:14:18.137 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.137 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:18.137 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.137 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:14:18.137 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.137 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:18.137 [2024-12-06 13:53:17.302583] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:18.137 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.137 13:53:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:18.137 [2024-12-06 13:53:17.494225] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:19.513 Initializing NVMe Controllers 00:14:19.513 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:19.513 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:14:19.513 Initialization complete. Launching workers. 00:14:19.513 ======================================================== 00:14:19.513 Latency(us) 00:14:19.513 Device Information : IOPS MiB/s Average min max 00:14:19.513 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 503.03 62.88 7952.17 5997.54 10031.86 00:14:19.513 ======================================================== 00:14:19.513 Total : 503.03 62.88 7952.17 5997.54 10031.86 00:14:19.513 00:14:19.513 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:14:19.513 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.513 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:19.513 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:14:19.513 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.513 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4788 00:14:19.513 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4788 -eq 0 ]] 00:14:19.513 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:19.513 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:14:19.513 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:19.513 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:14:19.513 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:19.513 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:14:19.513 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:19.513 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:19.513 rmmod nvme_tcp 00:14:19.513 rmmod nvme_fabrics 00:14:19.513 rmmod nvme_keyring 00:14:19.772 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:19.772 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:14:19.772 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:14:19.772 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 73396 ']' 00:14:19.772 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 73396 00:14:19.772 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 73396 ']' 00:14:19.772 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- 
# kill -0 73396 00:14:19.772 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:14:19.772 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:19.772 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73396 00:14:19.772 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:19.772 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:19.772 killing process with pid 73396 00:14:19.772 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73396' 00:14:19.772 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 73396 00:14:19.772 13:53:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 73396 00:14:19.772 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:19.772 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:19.772 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:19.772 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:14:19.772 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:14:19.772 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:19.772 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:14:19.772 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:19.772 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:19.772 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:19.773 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:19.773 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:20.031 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:20.031 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:20.031 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:20.031 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:20.031 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:20.031 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:20.031 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:20.031 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:20.031 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:20.031 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:20.031 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:20.031 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.031 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:20.031 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:20.031 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:14:20.031 00:14:20.031 real 0m3.207s 00:14:20.031 user 0m2.591s 00:14:20.031 sys 0m0.748s 00:14:20.031 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:20.031 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:20.031 ************************************ 00:14:20.031 END TEST nvmf_wait_for_buf 00:14:20.031 ************************************ 00:14:20.031 13:53:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:14:20.031 13:53:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:14:20.031 13:53:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:14:20.031 13:53:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:20.031 13:53:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:20.031 13:53:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:20.031 ************************************ 00:14:20.031 START TEST nvmf_nsid 00:14:20.031 ************************************ 00:14:20.031 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:14:20.290 * Looking for test storage... 
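Before the nsid test gets going, it is worth condensing what wait_for_buf just verified: the target's iobuf small pool was deliberately shrunk (iobuf_set_options --small-pool-count 154) before the TCP transport was created with very small shared-buffer counts, a short spdk_nvme_perf run was pushed through it, and the test then required the nvmf_TCP iobuf channel to report at least one small-buffer allocation retry. A minimal sketch of that check, assuming a target started with --wait-for-rpc and reachable on the default /var/tmp/spdk.sock socket rather than the netns-wrapped rpc_cmd helper the test actually uses:

    # Sketch only: the buffer-starvation check performed by wait_for_buf.sh
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Shrink the small iobuf pool before the transport exists, so it must retry under load.
    $rpc iobuf_set_options --small-pool-count 154 --small_bufsize=8192
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
    # ... subsystem, namespace and listener creation plus the spdk_nvme_perf run go here ...

    # Pass criterion: the nvmf_TCP iobuf channel must have retried at least once.
    retry_count=$($rpc iobuf_get_stats |
      jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    [[ $retry_count -eq 0 ]] && exit 1

In the run above the transport retried 4788 times, so the test passed.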
00:14:20.290 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:20.290 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:20.290 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:14:20.290 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:20.290 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:20.290 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:20.290 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:20.290 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:20.290 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:14:20.290 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:14:20.290 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:14:20.290 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:14:20.290 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:14:20.290 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:14:20.290 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:14:20.290 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:20.290 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:14:20.290 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:14:20.290 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:20.290 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:20.290 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:14:20.290 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:14:20.290 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:20.290 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:14:20.290 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:14:20.290 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:14:20.290 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:14:20.290 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:20.290 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:14:20.290 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:14:20.290 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:20.290 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:20.290 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:14:20.290 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:20.290 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:20.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.290 --rc genhtml_branch_coverage=1 00:14:20.290 --rc genhtml_function_coverage=1 00:14:20.290 --rc genhtml_legend=1 00:14:20.290 --rc geninfo_all_blocks=1 00:14:20.290 --rc geninfo_unexecuted_blocks=1 00:14:20.290 00:14:20.290 ' 00:14:20.290 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:20.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.290 --rc genhtml_branch_coverage=1 00:14:20.290 --rc genhtml_function_coverage=1 00:14:20.290 --rc genhtml_legend=1 00:14:20.290 --rc geninfo_all_blocks=1 00:14:20.290 --rc geninfo_unexecuted_blocks=1 00:14:20.290 00:14:20.290 ' 00:14:20.290 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:20.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.290 --rc genhtml_branch_coverage=1 00:14:20.290 --rc genhtml_function_coverage=1 00:14:20.290 --rc genhtml_legend=1 00:14:20.290 --rc geninfo_all_blocks=1 00:14:20.290 --rc geninfo_unexecuted_blocks=1 00:14:20.290 00:14:20.290 ' 00:14:20.290 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:20.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.290 --rc genhtml_branch_coverage=1 00:14:20.290 --rc genhtml_function_coverage=1 00:14:20.290 --rc genhtml_legend=1 00:14:20.290 --rc geninfo_all_blocks=1 00:14:20.290 --rc geninfo_unexecuted_blocks=1 00:14:20.290 00:14:20.290 ' 00:14:20.290 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:20.290 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:14:20.290 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
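The version probing traced above (scripts/common.sh) decides which lcov flags get exported into LCOV_OPTS for coverage runs: each version string is split on '.', '-' and ':' and compared field by field. A rough, simplified re-creation of that helper — the names cmp_versions and lt match the trace, but the body here is a sketch that assumes purely numeric fields:

    # Sketch of the cmp_versions/lt helpers walked through in the trace
    cmp_versions() {   # usage: cmp_versions 1.15 '<' 2
      local IFS=.-:
      local -a ver1 ver2
      local op=$2 v
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' || $op == '>=' ]]; return; }
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' || $op == '<=' ]]; return; }
      done
      [[ $op == *'='* ]]   # equal versions only satisfy <=, >= and ==
    }
    lt() { cmp_versions "$1" '<' "$2"; }

    lt 1.15 2 && echo "lcov 1.15 predates 2.x"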
00:14:20.290 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:20.290 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:20.290 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=cfa2def7-c8af-457f-82a0-b312efdea7f4 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:20.291 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:20.291 Cannot find device "nvmf_init_br" 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:20.291 Cannot find device "nvmf_init_br2" 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:20.291 Cannot find device "nvmf_tgt_br" 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:20.291 Cannot find device "nvmf_tgt_br2" 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:20.291 Cannot find device "nvmf_init_br" 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:14:20.291 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:20.549 Cannot find device "nvmf_init_br2" 00:14:20.549 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:14:20.549 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:20.549 Cannot find device "nvmf_tgt_br" 00:14:20.549 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:14:20.549 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:20.549 Cannot find device "nvmf_tgt_br2" 00:14:20.549 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:14:20.549 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:20.549 Cannot find device "nvmf_br" 00:14:20.549 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:14:20.549 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:20.549 Cannot find device "nvmf_init_if" 00:14:20.549 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:14:20.549 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:20.549 Cannot find device "nvmf_init_if2" 00:14:20.549 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:14:20.549 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:20.549 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:20.550 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:14:20.550 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:14:20.550 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:20.550 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:14:20.550 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:20.550 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:20.550 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:20.550 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:20.550 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:20.550 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:20.550 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:20.550 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:20.550 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:20.550 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:20.550 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:20.550 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:20.550 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:20.550 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:20.550 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:20.550 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:20.550 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:20.550 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:20.550 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:20.550 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:20.550 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:20.550 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:20.550 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:20.807 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:20.807 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:20.807 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
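At this point nvmf_veth_init has rebuilt the same test topology the previous test used: two veth pairs face the initiator (10.0.0.1 and 10.0.0.2), two are moved into the nvmf_tgt_ns_spdk namespace for the target (10.0.0.3 and 10.0.0.4), and the host-side peers are enslaved to a bridge so both sides can reach each other. Condensed into a standalone sketch (device names and addresses are taken from the trace; error handling and the initial cleanup pass are omitted):

    # Sketch: the veth/namespace/bridge topology that nvmf_veth_init builds
    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator-facing pairs
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target-facing pairs
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                # target ends live in the netns
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge                                 # tie the host-side peers together
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
    done

The iptables ACCEPT rules and ping checks that follow in the trace then confirm the 10.0.0.x addresses are reachable in both directions before any NVMe-oF traffic is attempted.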
00:14:20.807 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:20.807 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:20.807 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:20.807 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:20.807 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:20.807 13:53:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:20.807 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:20.807 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:20.807 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:14:20.807 00:14:20.807 --- 10.0.0.3 ping statistics --- 00:14:20.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.807 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:14:20.807 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:20.807 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:20.807 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.090 ms 00:14:20.807 00:14:20.807 --- 10.0.0.4 ping statistics --- 00:14:20.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.807 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:14:20.807 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:20.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:20.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:14:20.807 00:14:20.807 --- 10.0.0.1 ping statistics --- 00:14:20.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.807 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:14:20.807 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:20.807 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:20.807 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:14:20.807 00:14:20.807 --- 10.0.0.2 ping statistics --- 00:14:20.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.807 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:14:20.807 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:20.807 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:14:20.807 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:20.807 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:20.807 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:20.807 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:20.807 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:20.807 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:20.807 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:20.807 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:14:20.807 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:20.807 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:20.807 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:20.807 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=73653 00:14:20.807 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:14:20.807 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 73653 00:14:20.807 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73653 ']' 00:14:20.807 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.807 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:20.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.807 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.807 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:20.807 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:20.807 [2024-12-06 13:53:20.113189] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:14:20.807 [2024-12-06 13:53:20.113291] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.065 [2024-12-06 13:53:20.260228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.065 [2024-12-06 13:53:20.299191] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:21.065 [2024-12-06 13:53:20.299238] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:21.065 [2024-12-06 13:53:20.299247] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:21.065 [2024-12-06 13:53:20.299254] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:21.065 [2024-12-06 13:53:20.299260] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:21.065 [2024-12-06 13:53:20.299634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.065 [2024-12-06 13:53:20.349532] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:21.065 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:21.065 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:14:21.065 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:21.065 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:21.065 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:21.065 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:21.065 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:21.065 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=73679 00:14:21.322 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:14:21.322 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:14:21.322 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:14:21.322 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:14:21.322 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:14:21.322 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:14:21.322 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:14:21.322 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:14:21.322 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:14:21.322 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:14:21.322 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:14:21.322 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:14:21.322 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:14:21.322 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:14:21.322 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:14:21.322 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=690c2ba2-1940-4375-9444-d0d1ebf22f4e 00:14:21.322 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:14:21.322 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=b05d4752-416b-4018-acfb-d1fb61ff2907 00:14:21.322 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:14:21.322 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=c485693d-d267-4e9f-ad7b-83ba32302bea 00:14:21.322 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:14:21.322 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.322 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:21.322 null0 00:14:21.322 null1 00:14:21.322 null2 00:14:21.322 [2024-12-06 13:53:20.520879] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:21.322 [2024-12-06 13:53:20.538995] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:14:21.322 [2024-12-06 13:53:20.539131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73679 ] 00:14:21.322 [2024-12-06 13:53:20.544987] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:21.322 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.322 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 73679 /var/tmp/tgt2.sock 00:14:21.322 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73679 ']' 00:14:21.322 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:14:21.322 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:21.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:14:21.322 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
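The uuidgen values recorded above are later verified in NGUID form: the nsid checks strip the dashes, upper-case the hex, and compare the result against the .nguid field that `nvme id-ns -o json` reports for each namespace. A minimal standalone sketch of that check, assuming a connected namespace at /dev/nvme0n1 with nvme-cli and jq on the PATH, and using the first UUID above purely as an illustration:

    uuid="690c2ba2-1940-4375-9444-d0d1ebf22f4e"   # ns1uuid generated above (illustrative value)
    # uuid2nguid: drop the dashes and upper-case the hex, same as the tr -d - step in this trace
    expected=$(tr -d - <<< "$uuid" | tr '[:lower:]' '[:upper:]')
    # NGUID the target actually reports for the namespace
    reported=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid | tr '[:lower:]' '[:upper:]')
    [[ "$reported" == "$expected" ]] && echo "NGUID matches: $reported" \
        || { echo "NGUID mismatch: expected $expected, got $reported" >&2; exit 1; }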
00:14:21.322 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:21.322 13:53:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:21.322 [2024-12-06 13:53:20.692727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.579 [2024-12-06 13:53:20.753647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.579 [2024-12-06 13:53:20.847352] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:21.835 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:21.835 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:14:21.835 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:14:22.400 [2024-12-06 13:53:21.508414] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:22.400 [2024-12-06 13:53:21.524579] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:14:22.400 nvme0n1 nvme0n2 00:14:22.400 nvme1n1 00:14:22.400 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:14:22.400 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:14:22.400 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid=cfa2def7-c8af-457f-82a0-b312efdea7f4 00:14:22.400 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:14:22.400 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:14:22.400 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:14:22.400 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:14:22.400 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:14:22.400 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:14:22.400 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:14:22.400 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:14:22.400 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:22.400 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:14:22.400 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:14:22.400 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:14:22.400 13:53:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:14:23.333 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:14:23.333 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:23.591 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:23.591 13:53:22 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:14:23.591 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:14:23.591 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 690c2ba2-1940-4375-9444-d0d1ebf22f4e 00:14:23.591 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:14:23.591 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:14:23.591 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:14:23.591 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:14:23.591 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:14:23.591 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=690c2ba2194043759444d0d1ebf22f4e 00:14:23.591 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 690C2BA2194043759444D0D1EBF22F4E 00:14:23.591 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 690C2BA2194043759444D0D1EBF22F4E == \6\9\0\C\2\B\A\2\1\9\4\0\4\3\7\5\9\4\4\4\D\0\D\1\E\B\F\2\2\F\4\E ]] 00:14:23.591 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:14:23.591 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:14:23.591 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:23.591 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:14:23.591 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:23.591 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:14:23.591 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:14:23.591 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid b05d4752-416b-4018-acfb-d1fb61ff2907 00:14:23.591 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:14:23.591 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:14:23.591 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:14:23.591 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:14:23.591 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:14:23.591 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=b05d4752416b4018acfbd1fb61ff2907 00:14:23.591 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo B05D4752416B4018ACFBD1FB61FF2907 00:14:23.591 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ B05D4752416B4018ACFBD1FB61FF2907 == \B\0\5\D\4\7\5\2\4\1\6\B\4\0\1\8\A\C\F\B\D\1\F\B\6\1\F\F\2\9\0\7 ]] 00:14:23.591 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:14:23.591 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:14:23.591 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:23.591 13:53:22 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:14:23.591 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:23.591 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:14:23.591 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:14:23.591 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid c485693d-d267-4e9f-ad7b-83ba32302bea 00:14:23.591 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:14:23.591 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:14:23.591 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:14:23.591 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:14:23.591 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:14:23.591 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=c485693dd2674e9fad7b83ba32302bea 00:14:23.591 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo C485693DD2674E9FAD7B83BA32302BEA 00:14:23.591 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ C485693DD2674E9FAD7B83BA32302BEA == \C\4\8\5\6\9\3\D\D\2\6\7\4\E\9\F\A\D\7\B\8\3\B\A\3\2\3\0\2\B\E\A ]] 00:14:23.591 13:53:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:14:23.849 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:14:23.849 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:14:23.849 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 73679 00:14:23.849 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73679 ']' 00:14:23.849 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73679 00:14:23.849 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:14:23.849 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:23.849 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73679 00:14:23.849 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:23.849 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:23.849 killing process with pid 73679 00:14:23.850 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73679' 00:14:23.850 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73679 00:14:23.850 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73679 00:14:24.417 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:14:24.417 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:24.417 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:14:24.417 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:14:24.417 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:14:24.417 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:24.417 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:24.417 rmmod nvme_tcp 00:14:24.417 rmmod nvme_fabrics 00:14:24.417 rmmod nvme_keyring 00:14:24.417 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:24.417 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:14:24.417 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:14:24.417 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 73653 ']' 00:14:24.417 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 73653 00:14:24.417 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73653 ']' 00:14:24.417 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73653 00:14:24.417 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:14:24.417 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:24.417 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73653 00:14:24.675 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:24.675 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:24.675 killing process with pid 73653 00:14:24.675 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73653' 00:14:24.675 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73653 00:14:24.675 13:53:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73653 00:14:24.675 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:24.675 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:24.675 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:24.675 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:14:24.675 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:14:24.675 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:24.675 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:14:24.675 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:24.675 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:24.675 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:24.675 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:24.675 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:24.675 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:14:24.932 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:24.932 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:24.932 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:24.932 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:24.932 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:24.932 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:24.932 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:24.932 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:24.932 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:24.932 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:24.932 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.932 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:24.932 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.932 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:14:24.932 00:14:24.932 real 0m4.827s 00:14:24.932 user 0m7.157s 00:14:24.932 sys 0m1.808s 00:14:24.932 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:24.932 13:53:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:24.932 ************************************ 00:14:24.932 END TEST nvmf_nsid 00:14:24.932 ************************************ 00:14:24.932 13:53:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:24.932 00:14:24.932 real 4m55.835s 00:14:24.932 user 10m13.555s 00:14:24.932 sys 1m7.911s 00:14:24.932 13:53:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:24.932 13:53:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:24.932 ************************************ 00:14:24.932 END TEST nvmf_target_extra 00:14:24.932 ************************************ 00:14:25.190 13:53:24 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:14:25.190 13:53:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:25.190 13:53:24 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:25.190 13:53:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:25.190 ************************************ 00:14:25.190 START TEST nvmf_host 00:14:25.190 ************************************ 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:14:25.190 * Looking for test storage... 
00:14:25.190 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:25.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.190 --rc genhtml_branch_coverage=1 00:14:25.190 --rc genhtml_function_coverage=1 00:14:25.190 --rc genhtml_legend=1 00:14:25.190 --rc geninfo_all_blocks=1 00:14:25.190 --rc geninfo_unexecuted_blocks=1 00:14:25.190 00:14:25.190 ' 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:25.190 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:25.190 --rc genhtml_branch_coverage=1 00:14:25.190 --rc genhtml_function_coverage=1 00:14:25.190 --rc genhtml_legend=1 00:14:25.190 --rc geninfo_all_blocks=1 00:14:25.190 --rc geninfo_unexecuted_blocks=1 00:14:25.190 00:14:25.190 ' 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:25.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.190 --rc genhtml_branch_coverage=1 00:14:25.190 --rc genhtml_function_coverage=1 00:14:25.190 --rc genhtml_legend=1 00:14:25.190 --rc geninfo_all_blocks=1 00:14:25.190 --rc geninfo_unexecuted_blocks=1 00:14:25.190 00:14:25.190 ' 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:25.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.190 --rc genhtml_branch_coverage=1 00:14:25.190 --rc genhtml_function_coverage=1 00:14:25.190 --rc genhtml_legend=1 00:14:25.190 --rc geninfo_all_blocks=1 00:14:25.190 --rc geninfo_unexecuted_blocks=1 00:14:25.190 00:14:25.190 ' 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cfa2def7-c8af-457f-82a0-b312efdea7f4 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:25.190 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:25.190 
13:53:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:25.190 ************************************ 00:14:25.190 START TEST nvmf_identify 00:14:25.190 ************************************ 00:14:25.190 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:25.447 * Looking for test storage... 00:14:25.447 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:25.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.447 --rc genhtml_branch_coverage=1 00:14:25.447 --rc genhtml_function_coverage=1 00:14:25.447 --rc genhtml_legend=1 00:14:25.447 --rc geninfo_all_blocks=1 00:14:25.447 --rc geninfo_unexecuted_blocks=1 00:14:25.447 00:14:25.447 ' 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:25.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.447 --rc genhtml_branch_coverage=1 00:14:25.447 --rc genhtml_function_coverage=1 00:14:25.447 --rc genhtml_legend=1 00:14:25.447 --rc geninfo_all_blocks=1 00:14:25.447 --rc geninfo_unexecuted_blocks=1 00:14:25.447 00:14:25.447 ' 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:25.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.447 --rc genhtml_branch_coverage=1 00:14:25.447 --rc genhtml_function_coverage=1 00:14:25.447 --rc genhtml_legend=1 00:14:25.447 --rc geninfo_all_blocks=1 00:14:25.447 --rc geninfo_unexecuted_blocks=1 00:14:25.447 00:14:25.447 ' 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:25.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.447 --rc genhtml_branch_coverage=1 00:14:25.447 --rc genhtml_function_coverage=1 00:14:25.447 --rc genhtml_legend=1 00:14:25.447 --rc geninfo_all_blocks=1 00:14:25.447 --rc geninfo_unexecuted_blocks=1 00:14:25.447 00:14:25.447 ' 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=cfa2def7-c8af-457f-82a0-b312efdea7f4 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.447 
13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:25.447 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:25.447 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:14:25.448 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:25.448 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:25.448 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:25.448 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:25.448 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:25.448 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:25.448 13:53:24 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:25.448 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:25.448 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:25.448 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:25.448 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:25.448 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:25.448 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:25.448 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:25.448 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:25.448 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:25.448 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:25.448 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:25.448 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:25.448 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:25.448 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:25.448 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:25.448 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:25.448 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:25.448 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:25.448 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:25.448 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:25.448 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:25.448 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:25.448 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:25.448 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:25.448 Cannot find device "nvmf_init_br" 00:14:25.448 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:14:25.448 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:25.448 Cannot find device "nvmf_init_br2" 00:14:25.448 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:14:25.448 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:25.448 Cannot find device "nvmf_tgt_br" 00:14:25.448 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:14:25.448 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:14:25.448 Cannot find device "nvmf_tgt_br2" 00:14:25.448 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:14:25.448 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:25.448 Cannot find device "nvmf_init_br" 00:14:25.448 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:14:25.448 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:25.706 Cannot find device "nvmf_init_br2" 00:14:25.706 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:14:25.706 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:25.706 Cannot find device "nvmf_tgt_br" 00:14:25.706 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:14:25.706 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:25.706 Cannot find device "nvmf_tgt_br2" 00:14:25.706 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:14:25.706 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:25.706 Cannot find device "nvmf_br" 00:14:25.706 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:14:25.706 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:25.706 Cannot find device "nvmf_init_if" 00:14:25.706 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:14:25.706 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:25.706 Cannot find device "nvmf_init_if2" 00:14:25.706 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:14:25.706 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:25.706 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:25.706 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:14:25.706 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:25.706 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:25.706 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:14:25.706 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:25.706 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:25.706 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:25.706 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:25.706 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:25.706 13:53:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:25.706 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:25.706 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:25.706 
13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:25.706 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:25.706 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:25.706 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:25.706 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:25.706 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:25.706 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:25.706 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:25.706 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:25.706 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:25.706 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:25.706 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:25.974 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:25.974 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:25.974 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:25.975 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:25.975 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:25.975 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:25.975 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:25.975 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:25.975 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:25.975 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:25.975 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:25.975 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:25.975 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:25.975 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:25.975 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.102 ms 00:14:25.975 00:14:25.975 --- 10.0.0.3 ping statistics --- 00:14:25.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.975 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:14:25.975 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:25.975 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:25.975 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.100 ms 00:14:25.975 00:14:25.975 --- 10.0.0.4 ping statistics --- 00:14:25.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.975 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:14:25.975 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:25.975 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:25.975 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:14:25.975 00:14:25.975 --- 10.0.0.1 ping statistics --- 00:14:25.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.975 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:14:25.975 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:25.975 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:25.975 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:14:25.975 00:14:25.975 --- 10.0.0.2 ping statistics --- 00:14:25.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.976 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:14:25.976 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:25.976 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:14:25.976 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:25.976 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:25.976 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:25.976 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:25.976 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:25.976 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:25.976 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:25.976 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:14:25.976 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:25.976 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:25.976 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74030 00:14:25.976 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:25.976 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74030 00:14:25.976 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:25.976 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 74030 ']' 00:14:25.976 
13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.976 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:25.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.976 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.976 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:25.976 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:25.976 [2024-12-06 13:53:25.295958] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:14:25.976 [2024-12-06 13:53:25.296045] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:26.233 [2024-12-06 13:53:25.449513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:26.233 [2024-12-06 13:53:25.506231] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:26.233 [2024-12-06 13:53:25.506284] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:26.233 [2024-12-06 13:53:25.506303] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:26.233 [2024-12-06 13:53:25.506313] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:26.233 [2024-12-06 13:53:25.506323] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
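The bring-up recorded above is driven by nvmf/common.sh: the veth interfaces are addressed, the bridge-side ends are enslaved to nvmf_br, port 4420 is opened in iptables, connectivity is checked with ping in both directions, and nvmf_tgt is then started inside the nvmf_tgt_ns_spdk namespace. A condensed standalone sketch of that sequence follows, using only commands visible in the log; the poll loop standing in for the waitforlisten helper is an assumption, not the harness's actual implementation.

# Sketch only: interface names, addresses and paths are taken from the log above.
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_if2 up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3                                   # host side -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target namespace -> host side

# Start the target inside the namespace and wait for its RPC socket
# (simple poll used here as an assumed stand-in for waitforlisten).
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done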
00:14:26.233 [2024-12-06 13:53:25.507614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:26.233 [2024-12-06 13:53:25.507773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:26.233 [2024-12-06 13:53:25.507874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:26.233 [2024-12-06 13:53:25.507875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.233 [2024-12-06 13:53:25.566989] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:26.490 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:26.490 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:14:26.490 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:26.490 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.490 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:26.490 [2024-12-06 13:53:25.645377] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:26.490 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.490 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:14:26.490 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:26.490 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:26.490 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:26.490 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.490 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:26.490 Malloc0 00:14:26.490 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.490 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:26.490 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.490 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:26.490 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.490 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:14:26.490 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.490 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:26.490 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.490 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:26.490 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.490 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:26.490 [2024-12-06 13:53:25.756908] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:26.490 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.490 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:14:26.490 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.490 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:26.490 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.490 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:14:26.490 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.490 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:26.490 [ 00:14:26.490 { 00:14:26.490 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:26.490 "subtype": "Discovery", 00:14:26.490 "listen_addresses": [ 00:14:26.490 { 00:14:26.490 "trtype": "TCP", 00:14:26.490 "adrfam": "IPv4", 00:14:26.490 "traddr": "10.0.0.3", 00:14:26.490 "trsvcid": "4420" 00:14:26.490 } 00:14:26.490 ], 00:14:26.490 "allow_any_host": true, 00:14:26.490 "hosts": [] 00:14:26.490 }, 00:14:26.490 { 00:14:26.490 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:26.490 "subtype": "NVMe", 00:14:26.490 "listen_addresses": [ 00:14:26.490 { 00:14:26.490 "trtype": "TCP", 00:14:26.490 "adrfam": "IPv4", 00:14:26.490 "traddr": "10.0.0.3", 00:14:26.490 "trsvcid": "4420" 00:14:26.490 } 00:14:26.490 ], 00:14:26.490 "allow_any_host": true, 00:14:26.490 "hosts": [], 00:14:26.490 "serial_number": "SPDK00000000000001", 00:14:26.490 "model_number": "SPDK bdev Controller", 00:14:26.490 "max_namespaces": 32, 00:14:26.490 "min_cntlid": 1, 00:14:26.490 "max_cntlid": 65519, 00:14:26.490 "namespaces": [ 00:14:26.490 { 00:14:26.490 "nsid": 1, 00:14:26.490 "bdev_name": "Malloc0", 00:14:26.490 "name": "Malloc0", 00:14:26.490 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:14:26.490 "eui64": "ABCDEF0123456789", 00:14:26.490 "uuid": "c00c470d-d70f-4d42-9c9d-db6d7a9cdd93" 00:14:26.490 } 00:14:26.490 ] 00:14:26.490 } 00:14:26.490 ] 00:14:26.490 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.490 13:53:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:14:26.490 [2024-12-06 13:53:25.810175] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
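The rpc_cmd calls above configure the target over /var/tmp/spdk.sock before the identify run starts. Expressed as direct rpc.py invocations, the same configuration looks like the sketch below; the method names and arguments are exactly those shown in the log, while the scripts/rpc.py path is an assumption.

# Sketch: target-side NVMe-oF configuration, mirroring the rpc_cmd calls in host/identify.sh.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # assumed path; uses /var/tmp/spdk.sock by default
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
$RPC nvmf_get_subsystems    # should list the discovery subsystem and cnode1, as printed above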
00:14:26.490 [2024-12-06 13:53:25.810229] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74062 ] 00:14:26.749 [2024-12-06 13:53:25.962738] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:14:26.749 [2024-12-06 13:53:25.962805] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:26.749 [2024-12-06 13:53:25.962811] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:26.749 [2024-12-06 13:53:25.962823] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:26.749 [2024-12-06 13:53:25.962834] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:26.749 [2024-12-06 13:53:25.967159] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:14:26.749 [2024-12-06 13:53:25.967229] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1489750 0 00:14:26.749 [2024-12-06 13:53:25.975123] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:26.749 [2024-12-06 13:53:25.975145] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:26.749 [2024-12-06 13:53:25.975162] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:26.749 [2024-12-06 13:53:25.975165] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:26.749 [2024-12-06 13:53:25.975197] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:26.749 [2024-12-06 13:53:25.975205] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:26.749 [2024-12-06 13:53:25.975209] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1489750) 00:14:26.749 [2024-12-06 13:53:25.975221] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:26.749 [2024-12-06 13:53:25.975251] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ed740, cid 0, qid 0 00:14:26.749 [2024-12-06 13:53:25.983120] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:26.750 [2024-12-06 13:53:25.983139] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:26.750 [2024-12-06 13:53:25.983144] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:26.750 [2024-12-06 13:53:25.983159] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ed740) on tqpair=0x1489750 00:14:26.750 [2024-12-06 13:53:25.983168] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:26.750 [2024-12-06 13:53:25.983176] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:14:26.750 [2024-12-06 13:53:25.983182] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:14:26.750 [2024-12-06 13:53:25.983198] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:26.750 [2024-12-06 13:53:25.983203] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
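The connect and property-get exchange that follows is spdk_nvme_identify attaching to the discovery subsystem; the target is addressed with a transport ID string, as in host/identify.sh. Run by hand, the two identify passes in this test would look roughly like the sketch below, with the binary path, addresses and NQNs copied from the log.

# Identify the discovery subsystem, then the NVM subsystem; -L all enables all debug log flags.
IDENTIFY=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
$IDENTIFY -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
$IDENTIFY -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all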
00:14:26.750 [2024-12-06 13:53:25.983207] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1489750) 00:14:26.750 [2024-12-06 13:53:25.983215] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.750 [2024-12-06 13:53:25.983241] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ed740, cid 0, qid 0 00:14:26.750 [2024-12-06 13:53:25.983308] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:26.750 [2024-12-06 13:53:25.983315] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:26.750 [2024-12-06 13:53:25.983318] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:26.750 [2024-12-06 13:53:25.983322] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ed740) on tqpair=0x1489750 00:14:26.750 [2024-12-06 13:53:25.983328] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:14:26.750 [2024-12-06 13:53:25.983335] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:14:26.750 [2024-12-06 13:53:25.983342] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:26.750 [2024-12-06 13:53:25.983346] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:26.750 [2024-12-06 13:53:25.983349] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1489750) 00:14:26.750 [2024-12-06 13:53:25.983356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.750 [2024-12-06 13:53:25.983374] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ed740, cid 0, qid 0 00:14:26.750 [2024-12-06 13:53:25.983486] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:26.750 [2024-12-06 13:53:25.983494] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:26.750 [2024-12-06 13:53:25.983497] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:26.750 [2024-12-06 13:53:25.983501] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ed740) on tqpair=0x1489750 00:14:26.750 [2024-12-06 13:53:25.983507] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:14:26.750 [2024-12-06 13:53:25.983515] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:14:26.750 [2024-12-06 13:53:25.983524] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:26.750 [2024-12-06 13:53:25.983528] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:26.750 [2024-12-06 13:53:25.983531] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1489750) 00:14:26.750 [2024-12-06 13:53:25.983538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.750 [2024-12-06 13:53:25.983557] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ed740, cid 0, qid 0 00:14:26.750 [2024-12-06 13:53:25.983620] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:26.750 [2024-12-06 13:53:25.983627] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:26.750 [2024-12-06 13:53:25.983630] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:26.750 [2024-12-06 13:53:25.983634] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ed740) on tqpair=0x1489750 00:14:26.750 [2024-12-06 13:53:25.983639] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:26.750 [2024-12-06 13:53:25.983649] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:26.750 [2024-12-06 13:53:25.983653] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:26.750 [2024-12-06 13:53:25.983656] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1489750) 00:14:26.750 [2024-12-06 13:53:25.983663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.750 [2024-12-06 13:53:25.983683] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ed740, cid 0, qid 0 00:14:26.750 [2024-12-06 13:53:25.983735] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:26.750 [2024-12-06 13:53:25.983742] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:26.750 [2024-12-06 13:53:25.983745] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:26.750 [2024-12-06 13:53:25.983749] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ed740) on tqpair=0x1489750 00:14:26.750 [2024-12-06 13:53:25.983754] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:14:26.750 [2024-12-06 13:53:25.983759] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:14:26.750 [2024-12-06 13:53:25.983781] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:26.750 [2024-12-06 13:53:25.983892] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:14:26.750 [2024-12-06 13:53:25.983898] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:26.750 [2024-12-06 13:53:25.983906] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:26.750 [2024-12-06 13:53:25.983910] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:26.750 [2024-12-06 13:53:25.983914] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1489750) 00:14:26.750 [2024-12-06 13:53:25.983920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.750 [2024-12-06 13:53:25.983939] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ed740, cid 0, qid 0 00:14:26.750 [2024-12-06 13:53:25.983997] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:26.750 [2024-12-06 13:53:25.984004] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:26.750 [2024-12-06 13:53:25.984007] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:14:26.750 [2024-12-06 13:53:25.984010] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ed740) on tqpair=0x1489750 00:14:26.750 [2024-12-06 13:53:25.984015] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:26.750 [2024-12-06 13:53:25.984025] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:26.750 [2024-12-06 13:53:25.984029] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:26.750 [2024-12-06 13:53:25.984032] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1489750) 00:14:26.750 [2024-12-06 13:53:25.984039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.750 [2024-12-06 13:53:25.984055] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ed740, cid 0, qid 0 00:14:26.750 [2024-12-06 13:53:25.984129] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:26.750 [2024-12-06 13:53:25.984136] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:26.750 [2024-12-06 13:53:25.984139] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:26.750 [2024-12-06 13:53:25.984154] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ed740) on tqpair=0x1489750 00:14:26.750 [2024-12-06 13:53:25.984160] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:26.750 [2024-12-06 13:53:25.984165] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:14:26.750 [2024-12-06 13:53:25.984173] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:14:26.750 [2024-12-06 13:53:25.984183] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:14:26.750 [2024-12-06 13:53:25.984194] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:26.750 [2024-12-06 13:53:25.984198] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1489750) 00:14:26.750 [2024-12-06 13:53:25.984205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.750 [2024-12-06 13:53:25.984225] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ed740, cid 0, qid 0 00:14:26.750 [2024-12-06 13:53:25.984332] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:26.750 [2024-12-06 13:53:25.984339] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:26.750 [2024-12-06 13:53:25.984342] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:26.750 [2024-12-06 13:53:25.984346] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1489750): datao=0, datal=4096, cccid=0 00:14:26.750 [2024-12-06 13:53:25.984350] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14ed740) on tqpair(0x1489750): expected_datao=0, payload_size=4096 00:14:26.750 [2024-12-06 13:53:25.984355] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:14:26.750 [2024-12-06 13:53:25.984362] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:26.750 [2024-12-06 13:53:25.984367] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:26.750 [2024-12-06 13:53:25.984375] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:26.750 [2024-12-06 13:53:25.984381] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:26.750 [2024-12-06 13:53:25.984384] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:26.750 [2024-12-06 13:53:25.984388] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ed740) on tqpair=0x1489750 00:14:26.750 [2024-12-06 13:53:25.984395] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:14:26.750 [2024-12-06 13:53:25.984401] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:14:26.750 [2024-12-06 13:53:25.984405] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:14:26.750 [2024-12-06 13:53:25.984415] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:14:26.750 [2024-12-06 13:53:25.984420] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:14:26.751 [2024-12-06 13:53:25.984425] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:14:26.751 [2024-12-06 13:53:25.984436] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:14:26.751 [2024-12-06 13:53:25.984443] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:26.751 [2024-12-06 13:53:25.984447] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:26.751 [2024-12-06 13:53:25.984451] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1489750) 00:14:26.751 [2024-12-06 13:53:25.984458] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:26.751 [2024-12-06 13:53:25.984477] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ed740, cid 0, qid 0 00:14:26.751 [2024-12-06 13:53:25.984557] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:26.751 [2024-12-06 13:53:25.984564] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:26.751 [2024-12-06 13:53:25.984567] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:26.751 [2024-12-06 13:53:25.984571] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ed740) on tqpair=0x1489750 00:14:26.751 [2024-12-06 13:53:25.984578] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:26.751 [2024-12-06 13:53:25.984581] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:26.751 [2024-12-06 13:53:25.984585] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1489750) 00:14:26.751 [2024-12-06 13:53:25.984591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:26.751 
[2024-12-06 13:53:25.984597] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:26.751 [2024-12-06 13:53:25.984600] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:26.751 [2024-12-06 13:53:25.984603] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1489750) 00:14:26.751 [2024-12-06 13:53:25.984609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:26.751 [2024-12-06 13:53:25.984614] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:26.751 [2024-12-06 13:53:25.984618] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:26.751 [2024-12-06 13:53:25.984621] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1489750) 00:14:26.751 [2024-12-06 13:53:25.984626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:26.751 [2024-12-06 13:53:25.984631] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:26.751 [2024-12-06 13:53:25.984634] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:26.751 [2024-12-06 13:53:25.984638] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1489750) 00:14:26.751 [2024-12-06 13:53:25.984643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:26.751 [2024-12-06 13:53:25.984648] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:26.751 [2024-12-06 13:53:25.984656] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:26.751 [2024-12-06 13:53:25.984662] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:26.751 [2024-12-06 13:53:25.984666] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1489750) 00:14:26.751 [2024-12-06 13:53:25.984672] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.751 [2024-12-06 13:53:25.984693] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ed740, cid 0, qid 0 00:14:26.751 [2024-12-06 13:53:25.984699] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ed8c0, cid 1, qid 0 00:14:26.751 [2024-12-06 13:53:25.984704] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14eda40, cid 2, qid 0 00:14:26.751 [2024-12-06 13:53:25.984709] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14edbc0, cid 3, qid 0 00:14:26.751 [2024-12-06 13:53:25.984713] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14edd40, cid 4, qid 0 00:14:26.751 [2024-12-06 13:53:25.984814] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:26.751 [2024-12-06 13:53:25.984821] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:26.751 [2024-12-06 13:53:25.984841] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:26.751 [2024-12-06 13:53:25.984845] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14edd40) on tqpair=0x1489750 00:14:26.751 [2024-12-06 
13:53:25.984855] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:14:26.751 [2024-12-06 13:53:25.984861] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:14:26.751 [2024-12-06 13:53:25.984872] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:26.751 [2024-12-06 13:53:25.984877] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1489750) 00:14:26.751 [2024-12-06 13:53:25.984884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.751 [2024-12-06 13:53:25.984903] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14edd40, cid 4, qid 0 00:14:26.751 [2024-12-06 13:53:25.984973] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:26.751 [2024-12-06 13:53:25.984986] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:26.751 [2024-12-06 13:53:25.984990] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:26.751 [2024-12-06 13:53:25.984994] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1489750): datao=0, datal=4096, cccid=4 00:14:26.751 [2024-12-06 13:53:25.984999] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14edd40) on tqpair(0x1489750): expected_datao=0, payload_size=4096 00:14:26.751 [2024-12-06 13:53:25.985003] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:26.751 [2024-12-06 13:53:25.985010] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:26.751 [2024-12-06 13:53:25.985014] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:26.751 [2024-12-06 13:53:25.985022] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:26.751 [2024-12-06 13:53:25.985029] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:26.751 [2024-12-06 13:53:25.985032] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:26.751 [2024-12-06 13:53:25.985036] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14edd40) on tqpair=0x1489750 00:14:26.751 [2024-12-06 13:53:25.985049] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:14:26.751 [2024-12-06 13:53:25.985077] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:26.751 [2024-12-06 13:53:25.985082] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1489750) 00:14:26.751 [2024-12-06 13:53:25.985089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.751 [2024-12-06 13:53:25.985097] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:26.751 [2024-12-06 13:53:25.985114] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:26.751 [2024-12-06 13:53:25.985133] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1489750) 00:14:26.751 [2024-12-06 13:53:25.985140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:26.751 [2024-12-06 13:53:25.985182] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14edd40, cid 4, qid 0 00:14:26.751 [2024-12-06 13:53:25.985190] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14edec0, cid 5, qid 0 00:14:26.751 [2024-12-06 13:53:25.985303] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:26.751 [2024-12-06 13:53:25.985310] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:26.751 [2024-12-06 13:53:25.985313] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:26.751 [2024-12-06 13:53:25.985317] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1489750): datao=0, datal=1024, cccid=4 00:14:26.751 [2024-12-06 13:53:25.985322] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14edd40) on tqpair(0x1489750): expected_datao=0, payload_size=1024 00:14:26.751 [2024-12-06 13:53:25.985326] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:26.751 [2024-12-06 13:53:25.985333] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:26.751 [2024-12-06 13:53:25.985337] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:26.751 [2024-12-06 13:53:25.985342] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:26.751 [2024-12-06 13:53:25.985348] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:26.751 [2024-12-06 13:53:25.985352] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:26.751 [2024-12-06 13:53:25.985355] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14edec0) on tqpair=0x1489750 00:14:26.751 [2024-12-06 13:53:25.985374] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:26.751 [2024-12-06 13:53:25.985381] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:26.751 [2024-12-06 13:53:25.985385] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:26.751 [2024-12-06 13:53:25.985389] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14edd40) on tqpair=0x1489750 00:14:26.751 [2024-12-06 13:53:25.985402] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:26.751 [2024-12-06 13:53:25.985406] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1489750) 00:14:26.751 [2024-12-06 13:53:25.985414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.751 [2024-12-06 13:53:25.985438] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14edd40, cid 4, qid 0 00:14:26.751 [2024-12-06 13:53:25.985516] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:26.751 [2024-12-06 13:53:25.985522] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:26.751 [2024-12-06 13:53:25.985526] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:26.751 [2024-12-06 13:53:25.985530] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1489750): datao=0, datal=3072, cccid=4 00:14:26.751 [2024-12-06 13:53:25.985549] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14edd40) on tqpair(0x1489750): expected_datao=0, payload_size=3072 00:14:26.751 [2024-12-06 13:53:25.985553] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:26.751 [2024-12-06 13:53:25.985559] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
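In the admin-queue traffic of this exchange, the GET LOG PAGE (02) commands whose cdw10 values end in 0x70 are reads of the discovery log page (log identifier 0x70); the final short transfer looks like a re-read of the header to confirm the generation counter did not change mid-read. Purely as an aside, the same records could be retrieved from a Linux initiator with nvme-cli; this is hypothetical and not part of this test run.

# Hypothetical equivalent from a Linux host with nvme-cli installed.
nvme discover -t tcp -a 10.0.0.3 -s 4420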
00:14:26.751 [2024-12-06 13:53:25.985563] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:26.751 [2024-12-06 13:53:25.985574] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:26.751 [2024-12-06 13:53:25.985580] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:26.751 [2024-12-06 13:53:25.985583] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:26.752 [2024-12-06 13:53:25.985587] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14edd40) on tqpair=0x1489750 00:14:26.752 [2024-12-06 13:53:25.985596] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:26.752 [2024-12-06 13:53:25.985600] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1489750) 00:14:26.752 [2024-12-06 13:53:25.985607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.752 [2024-12-06 13:53:25.985629] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14edd40, cid 4, qid 0 00:14:26.752 ===================================================== 00:14:26.752 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:14:26.752 ===================================================== 00:14:26.752 Controller Capabilities/Features 00:14:26.752 ================================ 00:14:26.752 Vendor ID: 0000 00:14:26.752 Subsystem Vendor ID: 0000 00:14:26.752 Serial Number: .................... 00:14:26.752 Model Number: ........................................ 00:14:26.752 Firmware Version: 25.01 00:14:26.752 Recommended Arb Burst: 0 00:14:26.752 IEEE OUI Identifier: 00 00 00 00:14:26.752 Multi-path I/O 00:14:26.752 May have multiple subsystem ports: No 00:14:26.752 May have multiple controllers: No 00:14:26.752 Associated with SR-IOV VF: No 00:14:26.752 Max Data Transfer Size: 131072 00:14:26.752 Max Number of Namespaces: 0 00:14:26.752 Max Number of I/O Queues: 1024 00:14:26.752 NVMe Specification Version (VS): 1.3 00:14:26.752 NVMe Specification Version (Identify): 1.3 00:14:26.752 Maximum Queue Entries: 128 00:14:26.752 Contiguous Queues Required: Yes 00:14:26.752 Arbitration Mechanisms Supported 00:14:26.752 Weighted Round Robin: Not Supported 00:14:26.752 Vendor Specific: Not Supported 00:14:26.752 Reset Timeout: 15000 ms 00:14:26.752 Doorbell Stride: 4 bytes 00:14:26.752 NVM Subsystem Reset: Not Supported 00:14:26.752 Command Sets Supported 00:14:26.752 NVM Command Set: Supported 00:14:26.752 Boot Partition: Not Supported 00:14:26.752 Memory Page Size Minimum: 4096 bytes 00:14:26.752 Memory Page Size Maximum: 4096 bytes 00:14:26.752 Persistent Memory Region: Not Supported 00:14:26.752 Optional Asynchronous Events Supported 00:14:26.752 Namespace Attribute Notices: Not Supported 00:14:26.752 Firmware Activation Notices: Not Supported 00:14:26.752 ANA Change Notices: Not Supported 00:14:26.752 PLE Aggregate Log Change Notices: Not Supported 00:14:26.752 LBA Status Info Alert Notices: Not Supported 00:14:26.752 EGE Aggregate Log Change Notices: Not Supported 00:14:26.752 Normal NVM Subsystem Shutdown event: Not Supported 00:14:26.752 Zone Descriptor Change Notices: Not Supported 00:14:26.752 Discovery Log Change Notices: Supported 00:14:26.752 Controller Attributes 00:14:26.752 128-bit Host Identifier: Not Supported 00:14:26.752 Non-Operational Permissive Mode: Not Supported 00:14:26.752 NVM Sets: Not Supported 
00:14:26.752 Read Recovery Levels: Not Supported 00:14:26.752 Endurance Groups: Not Supported 00:14:26.752 Predictable Latency Mode: Not Supported 00:14:26.752 Traffic Based Keep ALive: Not Supported 00:14:26.752 Namespace Granularity: Not Supported 00:14:26.752 SQ Associations: Not Supported 00:14:26.752 UUID List: Not Supported 00:14:26.752 Multi-Domain Subsystem: Not Supported 00:14:26.752 Fixed Capacity Management: Not Supported 00:14:26.752 Variable Capacity Management: Not Supported 00:14:26.752 Delete Endurance Group: Not Supported 00:14:26.752 Delete NVM Set: Not Supported 00:14:26.752 Extended LBA Formats Supported: Not Supported 00:14:26.752 Flexible Data Placement Supported: Not Supported 00:14:26.752 00:14:26.752 Controller Memory Buffer Support 00:14:26.752 ================================ 00:14:26.752 Supported: No 00:14:26.752 00:14:26.752 Persistent Memory Region Support 00:14:26.752 ================================ 00:14:26.752 Supported: No 00:14:26.752 00:14:26.752 Admin Command Set Attributes 00:14:26.752 ============================ 00:14:26.752 Security Send/Receive: Not Supported 00:14:26.752 Format NVM: Not Supported 00:14:26.752 Firmware Activate/Download: Not Supported 00:14:26.752 Namespace Management: Not Supported 00:14:26.752 Device Self-Test: Not Supported 00:14:26.752 Directives: Not Supported 00:14:26.752 NVMe-MI: Not Supported 00:14:26.752 Virtualization Management: Not Supported 00:14:26.752 Doorbell Buffer Config: Not Supported 00:14:26.752 Get LBA Status Capability: Not Supported 00:14:26.752 Command & Feature Lockdown Capability: Not Supported 00:14:26.752 Abort Command Limit: 1 00:14:26.752 Async Event Request Limit: 4 00:14:26.752 Number of Firmware Slots: N/A 00:14:26.752 Firmware Slot 1 Read-Only: N/A 00:14:26.752 Firm[2024-12-06 13:53:25.985713] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:26.752 [2024-12-06 13:53:25.985720] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:26.752 [2024-12-06 13:53:25.985723] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:26.752 [2024-12-06 13:53:25.985726] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1489750): datao=0, datal=8, cccid=4 00:14:26.752 [2024-12-06 13:53:25.985731] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14edd40) on tqpair(0x1489750): expected_datao=0, payload_size=8 00:14:26.752 [2024-12-06 13:53:25.985735] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:26.752 [2024-12-06 13:53:25.985741] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:26.752 [2024-12-06 13:53:25.985744] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:26.752 [2024-12-06 13:53:25.985759] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:26.752 [2024-12-06 13:53:25.985765] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:26.752 [2024-12-06 13:53:25.985768] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:26.752 [2024-12-06 13:53:25.985772] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14edd40) on tqpair=0x1489750 00:14:26.752 ware Activation Without Reset: N/A 00:14:26.752 Multiple Update Detection Support: N/A 00:14:26.752 Firmware Update Granularity: No Information Provided 00:14:26.752 Per-Namespace SMART Log: No 00:14:26.752 Asymmetric Namespace Access Log Page: Not Supported 00:14:26.752 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:14:26.752 
Command Effects Log Page: Not Supported 00:14:26.752 Get Log Page Extended Data: Supported 00:14:26.752 Telemetry Log Pages: Not Supported 00:14:26.752 Persistent Event Log Pages: Not Supported 00:14:26.752 Supported Log Pages Log Page: May Support 00:14:26.752 Commands Supported & Effects Log Page: Not Supported 00:14:26.752 Feature Identifiers & Effects Log Page:May Support 00:14:26.752 NVMe-MI Commands & Effects Log Page: May Support 00:14:26.752 Data Area 4 for Telemetry Log: Not Supported 00:14:26.752 Error Log Page Entries Supported: 128 00:14:26.752 Keep Alive: Not Supported 00:14:26.752 00:14:26.752 NVM Command Set Attributes 00:14:26.752 ========================== 00:14:26.752 Submission Queue Entry Size 00:14:26.752 Max: 1 00:14:26.752 Min: 1 00:14:26.752 Completion Queue Entry Size 00:14:26.752 Max: 1 00:14:26.752 Min: 1 00:14:26.752 Number of Namespaces: 0 00:14:26.752 Compare Command: Not Supported 00:14:26.752 Write Uncorrectable Command: Not Supported 00:14:26.752 Dataset Management Command: Not Supported 00:14:26.752 Write Zeroes Command: Not Supported 00:14:26.752 Set Features Save Field: Not Supported 00:14:26.752 Reservations: Not Supported 00:14:26.752 Timestamp: Not Supported 00:14:26.752 Copy: Not Supported 00:14:26.752 Volatile Write Cache: Not Present 00:14:26.752 Atomic Write Unit (Normal): 1 00:14:26.752 Atomic Write Unit (PFail): 1 00:14:26.752 Atomic Compare & Write Unit: 1 00:14:26.752 Fused Compare & Write: Supported 00:14:26.752 Scatter-Gather List 00:14:26.752 SGL Command Set: Supported 00:14:26.752 SGL Keyed: Supported 00:14:26.752 SGL Bit Bucket Descriptor: Not Supported 00:14:26.752 SGL Metadata Pointer: Not Supported 00:14:26.752 Oversized SGL: Not Supported 00:14:26.752 SGL Metadata Address: Not Supported 00:14:26.752 SGL Offset: Supported 00:14:26.752 Transport SGL Data Block: Not Supported 00:14:26.752 Replay Protected Memory Block: Not Supported 00:14:26.752 00:14:26.752 Firmware Slot Information 00:14:26.752 ========================= 00:14:26.752 Active slot: 0 00:14:26.752 00:14:26.752 00:14:26.752 Error Log 00:14:26.752 ========= 00:14:26.752 00:14:26.752 Active Namespaces 00:14:26.752 ================= 00:14:26.752 Discovery Log Page 00:14:26.752 ================== 00:14:26.752 Generation Counter: 2 00:14:26.752 Number of Records: 2 00:14:26.752 Record Format: 0 00:14:26.752 00:14:26.752 Discovery Log Entry 0 00:14:26.752 ---------------------- 00:14:26.752 Transport Type: 3 (TCP) 00:14:26.752 Address Family: 1 (IPv4) 00:14:26.752 Subsystem Type: 3 (Current Discovery Subsystem) 00:14:26.752 Entry Flags: 00:14:26.752 Duplicate Returned Information: 1 00:14:26.752 Explicit Persistent Connection Support for Discovery: 1 00:14:26.752 Transport Requirements: 00:14:26.752 Secure Channel: Not Required 00:14:26.753 Port ID: 0 (0x0000) 00:14:26.753 Controller ID: 65535 (0xffff) 00:14:26.753 Admin Max SQ Size: 128 00:14:26.753 Transport Service Identifier: 4420 00:14:26.753 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:14:26.753 Transport Address: 10.0.0.3 00:14:26.753 Discovery Log Entry 1 00:14:26.753 ---------------------- 00:14:26.753 Transport Type: 3 (TCP) 00:14:26.753 Address Family: 1 (IPv4) 00:14:26.753 Subsystem Type: 2 (NVM Subsystem) 00:14:26.753 Entry Flags: 00:14:26.753 Duplicate Returned Information: 0 00:14:26.753 Explicit Persistent Connection Support for Discovery: 0 00:14:26.753 Transport Requirements: 00:14:26.753 Secure Channel: Not Required 00:14:26.753 Port ID: 0 (0x0000) 00:14:26.753 Controller ID: 65535 
(0xffff) 00:14:26.753 Admin Max SQ Size: 128 00:14:26.753 Transport Service Identifier: 4420 00:14:26.753 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:14:26.753 Transport Address: 10.0.0.3 [2024-12-06 13:53:25.985862] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:14:26.753 [2024-12-06 13:53:25.985875] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ed740) on tqpair=0x1489750 00:14:26.753 [2024-12-06 13:53:25.985882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.753 [2024-12-06 13:53:25.985887] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ed8c0) on tqpair=0x1489750 00:14:26.753 [2024-12-06 13:53:25.985891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.753 [2024-12-06 13:53:25.985896] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14eda40) on tqpair=0x1489750 00:14:26.753 [2024-12-06 13:53:25.985901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.753 [2024-12-06 13:53:25.985905] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14edbc0) on tqpair=0x1489750 00:14:26.753 [2024-12-06 13:53:25.985910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:26.753 [2024-12-06 13:53:25.985918] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:26.753 [2024-12-06 13:53:25.985922] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:26.753 [2024-12-06 13:53:25.985925] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1489750) 00:14:26.753 [2024-12-06 13:53:25.985932] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.753 [2024-12-06 13:53:25.985954] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14edbc0, cid 3, qid 0 00:14:26.753 [2024-12-06 13:53:25.986003] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:26.753 [2024-12-06 13:53:25.986010] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:26.753 [2024-12-06 13:53:25.986013] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:26.753 [2024-12-06 13:53:25.986017] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14edbc0) on tqpair=0x1489750 00:14:26.753 [2024-12-06 13:53:25.986028] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:26.753 [2024-12-06 13:53:25.986033] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:26.753 [2024-12-06 13:53:25.986036] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1489750) 00:14:26.753 [2024-12-06 13:53:25.986043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.753 [2024-12-06 13:53:25.986064] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14edbc0, cid 3, qid 0 00:14:26.753 [2024-12-06 13:53:25.986158] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:26.753 [2024-12-06 13:53:25.986167] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:14:26.753 [2024-12-06 13:53:25.986170] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:26.753 [2024-12-06 13:53:25.986174] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14edbc0) on tqpair=0x1489750 00:14:26.753 [2024-12-06 13:53:25.986179] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:14:26.753 [2024-12-06 13:53:25.986184] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:14:26.753 [2024-12-06 13:53:25.986193] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:26.753 [2024-12-06 13:53:25.986197] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:26.753 [2024-12-06 13:53:25.986201] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1489750) 00:14:26.753 [2024-12-06 13:53:25.986208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.753 [2024-12-06 13:53:25.986227] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14edbc0, cid 3, qid 0 00:14:26.753 [2024-12-06 13:53:25.986291] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:26.753 [2024-12-06 13:53:25.986297] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:26.753 [2024-12-06 13:53:25.986301] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:26.753 [2024-12-06 13:53:25.986304] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14edbc0) on tqpair=0x1489750 00:14:26.753 [2024-12-06 13:53:25.986314] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:26.753 [2024-12-06 13:53:25.986319] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:26.753 [2024-12-06 13:53:25.986322] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1489750) 00:14:26.753 [2024-12-06 13:53:25.986329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.753 [2024-12-06 13:53:25.986345] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14edbc0, cid 3, qid 0 00:14:26.753 [2024-12-06 13:53:25.986397] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:26.753 [2024-12-06 13:53:25.986403] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:26.753 [2024-12-06 13:53:25.986407] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:26.753 [2024-12-06 13:53:25.986410] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14edbc0) on tqpair=0x1489750 00:14:26.753 [2024-12-06 13:53:25.986420] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:26.753 [2024-12-06 13:53:25.986424] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:26.753 [2024-12-06 13:53:25.986427] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1489750) 00:14:26.753 [2024-12-06 13:53:25.986434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.753 [2024-12-06 13:53:25.986450] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14edbc0, cid 3, qid 0 00:14:26.753 [2024-12-06 13:53:25.986517] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:26.753 [2024-12-06 13:53:25.986523] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:26.753 [2024-12-06 13:53:25.986527] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:26.753 [2024-12-06 13:53:25.986530] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14edbc0) on tqpair=0x1489750 00:14:26.753 [2024-12-06 13:53:25.986540] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:26.753 [2024-12-06 13:53:25.986544] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:26.753 [2024-12-06 13:53:25.986547] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1489750) 00:14:26.753 [2024-12-06 13:53:25.986554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.753 [2024-12-06 13:53:25.986571] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14edbc0, cid 3, qid 0 00:14:26.753 [2024-12-06 13:53:25.986635] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:26.753 [2024-12-06 13:53:25.986641] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:26.753 [2024-12-06 13:53:25.986644] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:26.753 [2024-12-06 13:53:25.986648] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14edbc0) on tqpair=0x1489750 00:14:26.753 [2024-12-06 13:53:25.986658] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:26.753 [2024-12-06 13:53:25.986662] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:26.753 [2024-12-06 13:53:25.986665] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1489750) 00:14:26.753 [2024-12-06 13:53:25.986672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.753 [2024-12-06 13:53:25.986689] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14edbc0, cid 3, qid 0 00:14:26.753 [2024-12-06 13:53:25.986738] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:26.753 [2024-12-06 13:53:25.986745] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:26.753 [2024-12-06 13:53:25.986748] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:26.753 [2024-12-06 13:53:25.986752] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14edbc0) on tqpair=0x1489750 00:14:26.753 [2024-12-06 13:53:25.986761] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:26.753 [2024-12-06 13:53:25.986766] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:26.753 [2024-12-06 13:53:25.986769] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1489750) 00:14:26.753 [2024-12-06 13:53:25.986776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.753 [2024-12-06 13:53:25.986792] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14edbc0, cid 3, qid 0 00:14:26.753 [2024-12-06 13:53:25.986843] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:26.753 [2024-12-06 13:53:25.986851] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:26.753 [2024-12-06 
13:53:25.986854] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:26.753 [2024-12-06 13:53:25.986858] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14edbc0) on tqpair=0x1489750 00:14:26.753 [2024-12-06 13:53:25.986868] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:26.753 [2024-12-06 13:53:25.986873] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:26.753 [2024-12-06 13:53:25.986876] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1489750) 00:14:26.753 [2024-12-06 13:53:25.986883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.753 [2024-12-06 13:53:25.986899] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14edbc0, cid 3, qid 0 00:14:26.753 [2024-12-06 13:53:25.986958] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:26.754 [2024-12-06 13:53:25.986965] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:26.754 [2024-12-06 13:53:25.986969] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:26.754 [2024-12-06 13:53:25.986972] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14edbc0) on tqpair=0x1489750 00:14:26.754 [2024-12-06 13:53:25.986982] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:26.754 [2024-12-06 13:53:25.986986] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:26.754 [2024-12-06 13:53:25.986990] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1489750) 00:14:26.754 [2024-12-06 13:53:25.986997] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.754 [2024-12-06 13:53:25.987013] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14edbc0, cid 3, qid 0 00:14:26.754 [2024-12-06 13:53:25.987072] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:26.754 [2024-12-06 13:53:25.987078] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:26.754 [2024-12-06 13:53:25.987082] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:26.754 [2024-12-06 13:53:25.987085] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14edbc0) on tqpair=0x1489750 00:14:26.754 [2024-12-06 13:53:25.987095] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:26.754 [2024-12-06 13:53:25.987099] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:26.754 [2024-12-06 13:53:25.987104] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1489750) 00:14:26.754 [2024-12-06 13:53:25.987110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:26.754 [2024-12-06 13:53:25.991180] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14edbc0, cid 3, qid 0 00:14:26.754 [2024-12-06 13:53:25.991239] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:26.754 [2024-12-06 13:53:25.991246] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:26.754 [2024-12-06 13:53:25.991250] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:26.754 [2024-12-06 13:53:25.991254] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14edbc0) on 
tqpair=0x1489750 00:14:26.754 [2024-12-06 13:53:25.991273] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:14:26.754 00:14:26.754 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:14:26.754 [2024-12-06 13:53:26.034370] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:14:26.754 [2024-12-06 13:53:26.034419] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74065 ] 00:14:27.015 [2024-12-06 13:53:26.189309] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:14:27.015 [2024-12-06 13:53:26.189378] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:27.015 [2024-12-06 13:53:26.189385] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:27.015 [2024-12-06 13:53:26.189396] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:27.015 [2024-12-06 13:53:26.189406] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:27.015 [2024-12-06 13:53:26.189663] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:14:27.015 [2024-12-06 13:53:26.189713] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xd3f750 0 00:14:27.015 [2024-12-06 13:53:26.204155] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:27.015 [2024-12-06 13:53:26.204188] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:27.015 [2024-12-06 13:53:26.204210] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:27.015 [2024-12-06 13:53:26.204214] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:27.015 [2024-12-06 13:53:26.204242] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.015 [2024-12-06 13:53:26.204248] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.015 [2024-12-06 13:53:26.204252] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd3f750) 00:14:27.015 [2024-12-06 13:53:26.204262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:27.015 [2024-12-06 13:53:26.204291] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda3740, cid 0, qid 0 00:14:27.015 [2024-12-06 13:53:26.214162] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.015 [2024-12-06 13:53:26.214213] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.015 [2024-12-06 13:53:26.214222] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.015 [2024-12-06 13:53:26.214228] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda3740) on tqpair=0xd3f750 00:14:27.015 [2024-12-06 13:53:26.214238] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:27.015 [2024-12-06 
13:53:26.214246] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:14:27.015 [2024-12-06 13:53:26.214252] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:14:27.015 [2024-12-06 13:53:26.214271] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.015 [2024-12-06 13:53:26.214276] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.015 [2024-12-06 13:53:26.214280] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd3f750) 00:14:27.015 [2024-12-06 13:53:26.214290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.015 [2024-12-06 13:53:26.214321] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda3740, cid 0, qid 0 00:14:27.015 [2024-12-06 13:53:26.214388] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.015 [2024-12-06 13:53:26.214395] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.015 [2024-12-06 13:53:26.214399] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.015 [2024-12-06 13:53:26.214403] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda3740) on tqpair=0xd3f750 00:14:27.015 [2024-12-06 13:53:26.214409] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:14:27.015 [2024-12-06 13:53:26.214416] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:14:27.015 [2024-12-06 13:53:26.214424] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.015 [2024-12-06 13:53:26.214428] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.015 [2024-12-06 13:53:26.214432] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd3f750) 00:14:27.015 [2024-12-06 13:53:26.214439] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.015 [2024-12-06 13:53:26.214458] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda3740, cid 0, qid 0 00:14:27.015 [2024-12-06 13:53:26.214553] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.015 [2024-12-06 13:53:26.214559] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.015 [2024-12-06 13:53:26.214563] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.015 [2024-12-06 13:53:26.214567] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda3740) on tqpair=0xd3f750 00:14:27.015 [2024-12-06 13:53:26.214572] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:14:27.015 [2024-12-06 13:53:26.214580] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:14:27.015 [2024-12-06 13:53:26.214588] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.015 [2024-12-06 13:53:26.214591] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.015 [2024-12-06 13:53:26.214595] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0xd3f750) 00:14:27.015 [2024-12-06 13:53:26.214602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.015 [2024-12-06 13:53:26.214619] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda3740, cid 0, qid 0 00:14:27.015 [2024-12-06 13:53:26.214689] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.015 [2024-12-06 13:53:26.214698] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.015 [2024-12-06 13:53:26.214702] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.015 [2024-12-06 13:53:26.214706] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda3740) on tqpair=0xd3f750 00:14:27.015 [2024-12-06 13:53:26.214712] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:27.015 [2024-12-06 13:53:26.214723] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.015 [2024-12-06 13:53:26.214727] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.015 [2024-12-06 13:53:26.214731] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd3f750) 00:14:27.015 [2024-12-06 13:53:26.214738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.015 [2024-12-06 13:53:26.214758] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda3740, cid 0, qid 0 00:14:27.015 [2024-12-06 13:53:26.214810] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.015 [2024-12-06 13:53:26.214817] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.015 [2024-12-06 13:53:26.214820] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.015 [2024-12-06 13:53:26.214824] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda3740) on tqpair=0xd3f750 00:14:27.015 [2024-12-06 13:53:26.214829] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:14:27.015 [2024-12-06 13:53:26.214835] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:14:27.015 [2024-12-06 13:53:26.214842] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:27.015 [2024-12-06 13:53:26.214953] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:14:27.015 [2024-12-06 13:53:26.214961] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:27.015 [2024-12-06 13:53:26.214969] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.015 [2024-12-06 13:53:26.214988] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.015 [2024-12-06 13:53:26.214992] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd3f750) 00:14:27.015 [2024-12-06 13:53:26.214998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.015 [2024-12-06 13:53:26.215017] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda3740, cid 0, qid 0 00:14:27.015 [2024-12-06 13:53:26.215070] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.015 [2024-12-06 13:53:26.215076] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.015 [2024-12-06 13:53:26.215080] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.015 [2024-12-06 13:53:26.215084] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda3740) on tqpair=0xd3f750 00:14:27.015 [2024-12-06 13:53:26.215089] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:27.015 [2024-12-06 13:53:26.215098] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.015 [2024-12-06 13:53:26.215103] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.016 [2024-12-06 13:53:26.215122] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd3f750) 00:14:27.016 [2024-12-06 13:53:26.215129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.016 [2024-12-06 13:53:26.215161] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda3740, cid 0, qid 0 00:14:27.016 [2024-12-06 13:53:26.215228] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.016 [2024-12-06 13:53:26.215237] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.016 [2024-12-06 13:53:26.215240] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.016 [2024-12-06 13:53:26.215244] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda3740) on tqpair=0xd3f750 00:14:27.016 [2024-12-06 13:53:26.215249] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:27.016 [2024-12-06 13:53:26.215255] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:14:27.016 [2024-12-06 13:53:26.215263] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:14:27.016 [2024-12-06 13:53:26.215273] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:14:27.016 [2024-12-06 13:53:26.215283] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.016 [2024-12-06 13:53:26.215287] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd3f750) 00:14:27.016 [2024-12-06 13:53:26.215295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.016 [2024-12-06 13:53:26.215315] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda3740, cid 0, qid 0 00:14:27.016 [2024-12-06 13:53:26.215474] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:27.016 [2024-12-06 13:53:26.215483] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:27.016 [2024-12-06 13:53:26.215487] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:27.016 [2024-12-06 13:53:26.215491] 
nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd3f750): datao=0, datal=4096, cccid=0 00:14:27.016 [2024-12-06 13:53:26.215496] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xda3740) on tqpair(0xd3f750): expected_datao=0, payload_size=4096 00:14:27.016 [2024-12-06 13:53:26.215501] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.016 [2024-12-06 13:53:26.215509] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:27.016 [2024-12-06 13:53:26.215514] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:27.016 [2024-12-06 13:53:26.215522] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.016 [2024-12-06 13:53:26.215528] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.016 [2024-12-06 13:53:26.215532] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.016 [2024-12-06 13:53:26.215537] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda3740) on tqpair=0xd3f750 00:14:27.016 [2024-12-06 13:53:26.215546] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:14:27.016 [2024-12-06 13:53:26.215551] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:14:27.016 [2024-12-06 13:53:26.215556] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:14:27.016 [2024-12-06 13:53:26.215565] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:14:27.016 [2024-12-06 13:53:26.215571] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:14:27.016 [2024-12-06 13:53:26.215576] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:14:27.016 [2024-12-06 13:53:26.215588] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:14:27.016 [2024-12-06 13:53:26.215597] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.016 [2024-12-06 13:53:26.215602] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.016 [2024-12-06 13:53:26.215613] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd3f750) 00:14:27.016 [2024-12-06 13:53:26.215621] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:27.016 [2024-12-06 13:53:26.215648] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda3740, cid 0, qid 0 00:14:27.016 [2024-12-06 13:53:26.215733] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.016 [2024-12-06 13:53:26.215741] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.016 [2024-12-06 13:53:26.215756] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.016 [2024-12-06 13:53:26.215760] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda3740) on tqpair=0xd3f750 00:14:27.016 [2024-12-06 13:53:26.215768] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.016 [2024-12-06 13:53:26.215772] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.016 [2024-12-06 
13:53:26.215776] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd3f750) 00:14:27.016 [2024-12-06 13:53:26.215782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:27.016 [2024-12-06 13:53:26.215789] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.016 [2024-12-06 13:53:26.215792] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.016 [2024-12-06 13:53:26.215796] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xd3f750) 00:14:27.016 [2024-12-06 13:53:26.215802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:27.016 [2024-12-06 13:53:26.215808] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.016 [2024-12-06 13:53:26.215811] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.016 [2024-12-06 13:53:26.215815] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xd3f750) 00:14:27.016 [2024-12-06 13:53:26.215821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:27.016 [2024-12-06 13:53:26.215826] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.016 [2024-12-06 13:53:26.215830] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.016 [2024-12-06 13:53:26.215834] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd3f750) 00:14:27.016 [2024-12-06 13:53:26.215839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:27.016 [2024-12-06 13:53:26.215845] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:27.016 [2024-12-06 13:53:26.215864] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:27.016 [2024-12-06 13:53:26.215871] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.016 [2024-12-06 13:53:26.215875] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd3f750) 00:14:27.016 [2024-12-06 13:53:26.215882] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.016 [2024-12-06 13:53:26.215902] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda3740, cid 0, qid 0 00:14:27.016 [2024-12-06 13:53:26.215909] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda38c0, cid 1, qid 0 00:14:27.016 [2024-12-06 13:53:26.215914] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda3a40, cid 2, qid 0 00:14:27.016 [2024-12-06 13:53:26.215919] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda3bc0, cid 3, qid 0 00:14:27.016 [2024-12-06 13:53:26.215924] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda3d40, cid 4, qid 0 00:14:27.016 [2024-12-06 13:53:26.216032] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.016 [2024-12-06 13:53:26.216038] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
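The debug records above trace the admin-queue bring-up that spdk_nvme_identify drives over NVMe/TCP: FABRIC CONNECT on the admin queue, property reads of VS and CAP, CC.EN written to 1, CSTS.RDY polled to 1, IDENTIFY controller, then the four ASYNC EVENT REQUESTs and keep-alive setup. From an application's point of view this whole state machine sits behind a single connect call. The sketch below is a minimal illustration assuming an SPDK development environment and the same transport ID string the test passes with -r; it is not the identify tool's actual source, and the program name is illustrative.

    /* identify_min.c - hedged sketch: connect to the NVMe-oF/TCP subsystem seen in
     * this log and print a few identify-controller fields. Assumes SPDK headers
     * and libraries are available. */
    #include <stdio.h>
    #include <string.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid;
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_min";
        if (spdk_env_init(&env_opts) < 0) {
            fprintf(stderr, "spdk_env_init failed\n");
            return 1;
        }

        /* Same target the test connects to: TCP, 10.0.0.3:4420, cnode1. */
        memset(&trid, 0, sizeof(trid));
        if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            fprintf(stderr, "failed to parse transport ID\n");
            return 1;
        }

        /* spdk_nvme_connect() runs the initialization traced above: FABRIC
         * CONNECT, VS/CAP reads, CC.EN = 1, wait for CSTS.RDY = 1, IDENTIFY,
         * AER and keep-alive configuration. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            fprintf(stderr, "connect to %s failed\n", trid.traddr);
            return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("Model: %.40s Serial: %.20s FW: %.8s\n",
               (const char *)cdata->mn, (const char *)cdata->sn,
               (const char *)cdata->fr);

        spdk_nvme_detach(ctrlr);
        return 0;
    }
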
00:14:27.016 [2024-12-06 13:53:26.216042] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.016 [2024-12-06 13:53:26.216046] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda3d40) on tqpair=0xd3f750 00:14:27.016 [2024-12-06 13:53:26.216055] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:14:27.016 [2024-12-06 13:53:26.216061] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:27.016 [2024-12-06 13:53:26.216070] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:14:27.016 [2024-12-06 13:53:26.216077] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:27.016 [2024-12-06 13:53:26.216084] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.016 [2024-12-06 13:53:26.216088] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.016 [2024-12-06 13:53:26.216092] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd3f750) 00:14:27.016 [2024-12-06 13:53:26.216099] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:27.016 [2024-12-06 13:53:26.216127] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda3d40, cid 4, qid 0 00:14:27.016 [2024-12-06 13:53:26.216244] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.016 [2024-12-06 13:53:26.216253] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.016 [2024-12-06 13:53:26.216257] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.016 [2024-12-06 13:53:26.216261] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda3d40) on tqpair=0xd3f750 00:14:27.016 [2024-12-06 13:53:26.216323] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:14:27.016 [2024-12-06 13:53:26.216335] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:27.016 [2024-12-06 13:53:26.216344] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.016 [2024-12-06 13:53:26.216348] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd3f750) 00:14:27.016 [2024-12-06 13:53:26.216356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.016 [2024-12-06 13:53:26.216377] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda3d40, cid 4, qid 0 00:14:27.016 [2024-12-06 13:53:26.216470] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:27.016 [2024-12-06 13:53:26.216477] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:27.016 [2024-12-06 13:53:26.216481] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:27.016 [2024-12-06 13:53:26.216485] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd3f750): datao=0, datal=4096, cccid=4 00:14:27.017 
[2024-12-06 13:53:26.216490] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xda3d40) on tqpair(0xd3f750): expected_datao=0, payload_size=4096 00:14:27.017 [2024-12-06 13:53:26.216495] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.017 [2024-12-06 13:53:26.216502] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:27.017 [2024-12-06 13:53:26.216506] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:27.017 [2024-12-06 13:53:26.216515] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.017 [2024-12-06 13:53:26.216521] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.017 [2024-12-06 13:53:26.216524] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.017 [2024-12-06 13:53:26.216528] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda3d40) on tqpair=0xd3f750 00:14:27.017 [2024-12-06 13:53:26.216561] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:14:27.017 [2024-12-06 13:53:26.216573] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:14:27.017 [2024-12-06 13:53:26.216584] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:14:27.017 [2024-12-06 13:53:26.216592] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.017 [2024-12-06 13:53:26.216596] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd3f750) 00:14:27.017 [2024-12-06 13:53:26.216603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.017 [2024-12-06 13:53:26.216622] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda3d40, cid 4, qid 0 00:14:27.017 [2024-12-06 13:53:26.216719] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:27.017 [2024-12-06 13:53:26.216729] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:27.017 [2024-12-06 13:53:26.216733] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:27.017 [2024-12-06 13:53:26.216737] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd3f750): datao=0, datal=4096, cccid=4 00:14:27.017 [2024-12-06 13:53:26.216742] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xda3d40) on tqpair(0xd3f750): expected_datao=0, payload_size=4096 00:14:27.017 [2024-12-06 13:53:26.216747] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.017 [2024-12-06 13:53:26.216754] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:27.017 [2024-12-06 13:53:26.216758] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:27.017 [2024-12-06 13:53:26.216767] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.017 [2024-12-06 13:53:26.216773] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.017 [2024-12-06 13:53:26.216777] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.017 [2024-12-06 13:53:26.216781] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda3d40) on tqpair=0xd3f750 00:14:27.017 [2024-12-06 13:53:26.216798] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 
1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:27.017 [2024-12-06 13:53:26.216810] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:27.017 [2024-12-06 13:53:26.216820] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.017 [2024-12-06 13:53:26.216824] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd3f750) 00:14:27.017 [2024-12-06 13:53:26.216832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.017 [2024-12-06 13:53:26.216853] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda3d40, cid 4, qid 0 00:14:27.017 [2024-12-06 13:53:26.216918] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:27.017 [2024-12-06 13:53:26.216925] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:27.017 [2024-12-06 13:53:26.216929] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:27.017 [2024-12-06 13:53:26.216933] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd3f750): datao=0, datal=4096, cccid=4 00:14:27.017 [2024-12-06 13:53:26.216937] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xda3d40) on tqpair(0xd3f750): expected_datao=0, payload_size=4096 00:14:27.017 [2024-12-06 13:53:26.216942] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.017 [2024-12-06 13:53:26.216949] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:27.017 [2024-12-06 13:53:26.216953] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:27.017 [2024-12-06 13:53:26.216961] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.017 [2024-12-06 13:53:26.216968] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.017 [2024-12-06 13:53:26.216971] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.017 [2024-12-06 13:53:26.216976] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda3d40) on tqpair=0xd3f750 00:14:27.017 [2024-12-06 13:53:26.216985] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:27.017 [2024-12-06 13:53:26.216994] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:14:27.017 [2024-12-06 13:53:26.217007] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:14:27.017 [2024-12-06 13:53:26.217029] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:27.017 [2024-12-06 13:53:26.217034] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:27.017 [2024-12-06 13:53:26.217040] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:14:27.017 [2024-12-06 13:53:26.217045] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features 
- Host ID 00:14:27.017 [2024-12-06 13:53:26.217050] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:14:27.017 [2024-12-06 13:53:26.217055] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:14:27.017 [2024-12-06 13:53:26.217069] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.017 [2024-12-06 13:53:26.217073] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd3f750) 00:14:27.017 [2024-12-06 13:53:26.217080] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.017 [2024-12-06 13:53:26.217087] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.017 [2024-12-06 13:53:26.217091] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.017 [2024-12-06 13:53:26.217095] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd3f750) 00:14:27.017 [2024-12-06 13:53:26.217101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:27.017 [2024-12-06 13:53:26.217141] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda3d40, cid 4, qid 0 00:14:27.017 [2024-12-06 13:53:26.217149] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda3ec0, cid 5, qid 0 00:14:27.017 [2024-12-06 13:53:26.217243] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.017 [2024-12-06 13:53:26.217252] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.017 [2024-12-06 13:53:26.217256] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.017 [2024-12-06 13:53:26.217260] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda3d40) on tqpair=0xd3f750 00:14:27.017 [2024-12-06 13:53:26.217267] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.017 [2024-12-06 13:53:26.217273] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.017 [2024-12-06 13:53:26.217278] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.017 [2024-12-06 13:53:26.217282] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda3ec0) on tqpair=0xd3f750 00:14:27.017 [2024-12-06 13:53:26.217293] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.017 [2024-12-06 13:53:26.217297] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd3f750) 00:14:27.017 [2024-12-06 13:53:26.217304] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.017 [2024-12-06 13:53:26.217324] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda3ec0, cid 5, qid 0 00:14:27.017 [2024-12-06 13:53:26.217377] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.017 [2024-12-06 13:53:26.217384] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.017 [2024-12-06 13:53:26.217388] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.017 [2024-12-06 13:53:26.217392] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda3ec0) on tqpair=0xd3f750 00:14:27.017 [2024-12-06 13:53:26.217402] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.017 [2024-12-06 13:53:26.217407] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd3f750) 00:14:27.017 [2024-12-06 13:53:26.217414] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.017 [2024-12-06 13:53:26.217430] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda3ec0, cid 5, qid 0 00:14:27.017 [2024-12-06 13:53:26.217531] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.017 [2024-12-06 13:53:26.217541] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.017 [2024-12-06 13:53:26.217545] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.017 [2024-12-06 13:53:26.217549] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda3ec0) on tqpair=0xd3f750 00:14:27.017 [2024-12-06 13:53:26.217575] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.017 [2024-12-06 13:53:26.217580] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd3f750) 00:14:27.017 [2024-12-06 13:53:26.217588] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.017 [2024-12-06 13:53:26.217610] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda3ec0, cid 5, qid 0 00:14:27.017 [2024-12-06 13:53:26.217679] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.017 [2024-12-06 13:53:26.217700] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.017 [2024-12-06 13:53:26.217705] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.017 [2024-12-06 13:53:26.217709] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda3ec0) on tqpair=0xd3f750 00:14:27.017 [2024-12-06 13:53:26.217747] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.017 [2024-12-06 13:53:26.217754] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd3f750) 00:14:27.017 [2024-12-06 13:53:26.217762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.017 [2024-12-06 13:53:26.217770] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.018 [2024-12-06 13:53:26.217774] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd3f750) 00:14:27.018 [2024-12-06 13:53:26.217781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.018 [2024-12-06 13:53:26.217789] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.018 [2024-12-06 13:53:26.217793] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xd3f750) 00:14:27.018 [2024-12-06 13:53:26.217800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.018 [2024-12-06 13:53:26.217808] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.018 [2024-12-06 13:53:26.217812] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xd3f750) 00:14:27.018 [2024-12-06 13:53:26.217818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.018 [2024-12-06 13:53:26.217841] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda3ec0, cid 5, qid 0 00:14:27.018 [2024-12-06 13:53:26.217849] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda3d40, cid 4, qid 0 00:14:27.018 [2024-12-06 13:53:26.217854] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda4040, cid 6, qid 0 00:14:27.018 [2024-12-06 13:53:26.217860] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda41c0, cid 7, qid 0 00:14:27.018 [2024-12-06 13:53:26.218007] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:27.018 [2024-12-06 13:53:26.218019] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:27.018 [2024-12-06 13:53:26.218024] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:27.018 [2024-12-06 13:53:26.218028] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd3f750): datao=0, datal=8192, cccid=5 00:14:27.018 [2024-12-06 13:53:26.218033] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xda3ec0) on tqpair(0xd3f750): expected_datao=0, payload_size=8192 00:14:27.018 [2024-12-06 13:53:26.218038] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.018 [2024-12-06 13:53:26.218056] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:27.018 [2024-12-06 13:53:26.218062] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:27.018 [2024-12-06 13:53:26.218068] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:27.018 [2024-12-06 13:53:26.218074] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:27.018 [2024-12-06 13:53:26.218078] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:27.018 [2024-12-06 13:53:26.218082] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd3f750): datao=0, datal=512, cccid=4 00:14:27.018 [2024-12-06 13:53:26.218101] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xda3d40) on tqpair(0xd3f750): expected_datao=0, payload_size=512 00:14:27.018 [2024-12-06 13:53:26.218106] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.018 [2024-12-06 13:53:26.218125] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:27.018 [2024-12-06 13:53:26.218130] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:27.018 [2024-12-06 13:53:26.218136] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:27.018 [2024-12-06 13:53:26.218142] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:27.018 [2024-12-06 13:53:26.218146] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:27.018 [2024-12-06 13:53:26.218150] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd3f750): datao=0, datal=512, cccid=6 00:14:27.018 [2024-12-06 13:53:26.218154] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xda4040) on tqpair(0xd3f750): expected_datao=0, payload_size=512 00:14:27.018 [2024-12-06 13:53:26.218159] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.018 [2024-12-06 13:53:26.218165] 
nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:27.018 [2024-12-06 13:53:26.218170] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:27.018 [2024-12-06 13:53:26.218175] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:27.018 [2024-12-06 13:53:26.218181] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:27.018 [2024-12-06 13:53:26.218185] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:27.018 [2024-12-06 13:53:26.218189] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd3f750): datao=0, datal=4096, cccid=7 00:14:27.018 [2024-12-06 13:53:26.218193] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xda41c0) on tqpair(0xd3f750): expected_datao=0, payload_size=4096 00:14:27.018 [2024-12-06 13:53:26.218197] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.018 [2024-12-06 13:53:26.218204] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:27.018 [2024-12-06 13:53:26.218209] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:27.018 [2024-12-06 13:53:26.218217] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.018 [2024-12-06 13:53:26.218223] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.018 [2024-12-06 13:53:26.218227] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.018 [2024-12-06 13:53:26.218240] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda3ec0) on tqpair=0xd3f750 00:14:27.018 [2024-12-06 13:53:26.218256] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.018 [2024-12-06 13:53:26.218264] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.018 [2024-12-06 13:53:26.218267] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.018 [2024-12-06 13:53:26.218271] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda3d40) on tqpair=0xd3f750 00:14:27.018 [2024-12-06 13:53:26.218283] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.018 [2024-12-06 13:53:26.218290] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.018 [2024-12-06 13:53:26.218293] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.018 [2024-12-06 13:53:26.218297] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda4040) on tqpair=0xd3f750 00:14:27.018 [2024-12-06 13:53:26.218305] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.018 [2024-12-06 13:53:26.218311] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.018 [2024-12-06 13:53:26.218315] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.018 [2024-12-06 13:53:26.218319] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda41c0) on tqpair=0xd3f750 00:14:27.018 ===================================================== 00:14:27.018 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:27.018 ===================================================== 00:14:27.018 Controller Capabilities/Features 00:14:27.018 ================================ 00:14:27.018 Vendor ID: 8086 00:14:27.018 Subsystem Vendor ID: 8086 00:14:27.018 Serial Number: SPDK00000000000001 00:14:27.018 Model Number: SPDK bdev Controller 00:14:27.018 Firmware Version: 25.01 00:14:27.018 Recommended Arb Burst: 6 00:14:27.018 IEEE OUI Identifier: e4 d2 
5c 00:14:27.018 Multi-path I/O 00:14:27.018 May have multiple subsystem ports: Yes 00:14:27.018 May have multiple controllers: Yes 00:14:27.018 Associated with SR-IOV VF: No 00:14:27.018 Max Data Transfer Size: 131072 00:14:27.018 Max Number of Namespaces: 32 00:14:27.018 Max Number of I/O Queues: 127 00:14:27.018 NVMe Specification Version (VS): 1.3 00:14:27.018 NVMe Specification Version (Identify): 1.3 00:14:27.018 Maximum Queue Entries: 128 00:14:27.018 Contiguous Queues Required: Yes 00:14:27.018 Arbitration Mechanisms Supported 00:14:27.018 Weighted Round Robin: Not Supported 00:14:27.018 Vendor Specific: Not Supported 00:14:27.018 Reset Timeout: 15000 ms 00:14:27.018 Doorbell Stride: 4 bytes 00:14:27.018 NVM Subsystem Reset: Not Supported 00:14:27.018 Command Sets Supported 00:14:27.018 NVM Command Set: Supported 00:14:27.018 Boot Partition: Not Supported 00:14:27.018 Memory Page Size Minimum: 4096 bytes 00:14:27.018 Memory Page Size Maximum: 4096 bytes 00:14:27.018 Persistent Memory Region: Not Supported 00:14:27.018 Optional Asynchronous Events Supported 00:14:27.018 Namespace Attribute Notices: Supported 00:14:27.018 Firmware Activation Notices: Not Supported 00:14:27.018 ANA Change Notices: Not Supported 00:14:27.018 PLE Aggregate Log Change Notices: Not Supported 00:14:27.018 LBA Status Info Alert Notices: Not Supported 00:14:27.018 EGE Aggregate Log Change Notices: Not Supported 00:14:27.018 Normal NVM Subsystem Shutdown event: Not Supported 00:14:27.018 Zone Descriptor Change Notices: Not Supported 00:14:27.018 Discovery Log Change Notices: Not Supported 00:14:27.018 Controller Attributes 00:14:27.018 128-bit Host Identifier: Supported 00:14:27.018 Non-Operational Permissive Mode: Not Supported 00:14:27.018 NVM Sets: Not Supported 00:14:27.018 Read Recovery Levels: Not Supported 00:14:27.018 Endurance Groups: Not Supported 00:14:27.018 Predictable Latency Mode: Not Supported 00:14:27.018 Traffic Based Keep ALive: Not Supported 00:14:27.018 Namespace Granularity: Not Supported 00:14:27.018 SQ Associations: Not Supported 00:14:27.018 UUID List: Not Supported 00:14:27.018 Multi-Domain Subsystem: Not Supported 00:14:27.018 Fixed Capacity Management: Not Supported 00:14:27.018 Variable Capacity Management: Not Supported 00:14:27.018 Delete Endurance Group: Not Supported 00:14:27.018 Delete NVM Set: Not Supported 00:14:27.018 Extended LBA Formats Supported: Not Supported 00:14:27.018 Flexible Data Placement Supported: Not Supported 00:14:27.018 00:14:27.018 Controller Memory Buffer Support 00:14:27.018 ================================ 00:14:27.018 Supported: No 00:14:27.018 00:14:27.018 Persistent Memory Region Support 00:14:27.018 ================================ 00:14:27.018 Supported: No 00:14:27.018 00:14:27.018 Admin Command Set Attributes 00:14:27.018 ============================ 00:14:27.018 Security Send/Receive: Not Supported 00:14:27.018 Format NVM: Not Supported 00:14:27.018 Firmware Activate/Download: Not Supported 00:14:27.018 Namespace Management: Not Supported 00:14:27.018 Device Self-Test: Not Supported 00:14:27.018 Directives: Not Supported 00:14:27.018 NVMe-MI: Not Supported 00:14:27.018 Virtualization Management: Not Supported 00:14:27.018 Doorbell Buffer Config: Not Supported 00:14:27.019 Get LBA Status Capability: Not Supported 00:14:27.019 Command & Feature Lockdown Capability: Not Supported 00:14:27.019 Abort Command Limit: 4 00:14:27.019 Async Event Request Limit: 4 00:14:27.019 Number of Firmware Slots: N/A 00:14:27.019 Firmware Slot 1 Read-Only: N/A 
00:14:27.019 Firmware Activation Without Reset: N/A 00:14:27.019 Multiple Update Detection Support: N/A 00:14:27.019 Firmware Update Granularity: No Information Provided 00:14:27.019 Per-Namespace SMART Log: No 00:14:27.019 Asymmetric Namespace Access Log Page: Not Supported 00:14:27.019 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:14:27.019 Command Effects Log Page: Supported 00:14:27.019 Get Log Page Extended Data: Supported 00:14:27.019 Telemetry Log Pages: Not Supported 00:14:27.019 Persistent Event Log Pages: Not Supported 00:14:27.019 Supported Log Pages Log Page: May Support 00:14:27.019 Commands Supported & Effects Log Page: Not Supported 00:14:27.019 Feature Identifiers & Effects Log Page:May Support 00:14:27.019 NVMe-MI Commands & Effects Log Page: May Support 00:14:27.019 Data Area 4 for Telemetry Log: Not Supported 00:14:27.019 Error Log Page Entries Supported: 128 00:14:27.019 Keep Alive: Supported 00:14:27.019 Keep Alive Granularity: 10000 ms 00:14:27.019 00:14:27.019 NVM Command Set Attributes 00:14:27.019 ========================== 00:14:27.019 Submission Queue Entry Size 00:14:27.019 Max: 64 00:14:27.019 Min: 64 00:14:27.019 Completion Queue Entry Size 00:14:27.019 Max: 16 00:14:27.019 Min: 16 00:14:27.019 Number of Namespaces: 32 00:14:27.019 Compare Command: Supported 00:14:27.019 Write Uncorrectable Command: Not Supported 00:14:27.019 Dataset Management Command: Supported 00:14:27.019 Write Zeroes Command: Supported 00:14:27.019 Set Features Save Field: Not Supported 00:14:27.019 Reservations: Supported 00:14:27.019 Timestamp: Not Supported 00:14:27.019 Copy: Supported 00:14:27.019 Volatile Write Cache: Present 00:14:27.019 Atomic Write Unit (Normal): 1 00:14:27.019 Atomic Write Unit (PFail): 1 00:14:27.019 Atomic Compare & Write Unit: 1 00:14:27.019 Fused Compare & Write: Supported 00:14:27.019 Scatter-Gather List 00:14:27.019 SGL Command Set: Supported 00:14:27.019 SGL Keyed: Supported 00:14:27.019 SGL Bit Bucket Descriptor: Not Supported 00:14:27.019 SGL Metadata Pointer: Not Supported 00:14:27.019 Oversized SGL: Not Supported 00:14:27.019 SGL Metadata Address: Not Supported 00:14:27.019 SGL Offset: Supported 00:14:27.019 Transport SGL Data Block: Not Supported 00:14:27.019 Replay Protected Memory Block: Not Supported 00:14:27.019 00:14:27.019 Firmware Slot Information 00:14:27.019 ========================= 00:14:27.019 Active slot: 1 00:14:27.019 Slot 1 Firmware Revision: 25.01 00:14:27.019 00:14:27.019 00:14:27.019 Commands Supported and Effects 00:14:27.019 ============================== 00:14:27.019 Admin Commands 00:14:27.019 -------------- 00:14:27.019 Get Log Page (02h): Supported 00:14:27.019 Identify (06h): Supported 00:14:27.019 Abort (08h): Supported 00:14:27.019 Set Features (09h): Supported 00:14:27.019 Get Features (0Ah): Supported 00:14:27.019 Asynchronous Event Request (0Ch): Supported 00:14:27.019 Keep Alive (18h): Supported 00:14:27.019 I/O Commands 00:14:27.019 ------------ 00:14:27.019 Flush (00h): Supported LBA-Change 00:14:27.019 Write (01h): Supported LBA-Change 00:14:27.019 Read (02h): Supported 00:14:27.019 Compare (05h): Supported 00:14:27.019 Write Zeroes (08h): Supported LBA-Change 00:14:27.019 Dataset Management (09h): Supported LBA-Change 00:14:27.019 Copy (19h): Supported LBA-Change 00:14:27.019 00:14:27.019 Error Log 00:14:27.019 ========= 00:14:27.019 00:14:27.019 Arbitration 00:14:27.019 =========== 00:14:27.019 Arbitration Burst: 1 00:14:27.019 00:14:27.019 Power Management 00:14:27.019 ================ 00:14:27.019 Number of Power 
States: 1 00:14:27.019 Current Power State: Power State #0 00:14:27.019 Power State #0: 00:14:27.019 Max Power: 0.00 W 00:14:27.019 Non-Operational State: Operational 00:14:27.019 Entry Latency: Not Reported 00:14:27.019 Exit Latency: Not Reported 00:14:27.019 Relative Read Throughput: 0 00:14:27.019 Relative Read Latency: 0 00:14:27.019 Relative Write Throughput: 0 00:14:27.019 Relative Write Latency: 0 00:14:27.019 Idle Power: Not Reported 00:14:27.019 Active Power: Not Reported 00:14:27.019 Non-Operational Permissive Mode: Not Supported 00:14:27.019 00:14:27.019 Health Information 00:14:27.019 ================== 00:14:27.019 Critical Warnings: 00:14:27.019 Available Spare Space: OK 00:14:27.019 Temperature: OK 00:14:27.019 Device Reliability: OK 00:14:27.019 Read Only: No 00:14:27.019 Volatile Memory Backup: OK 00:14:27.019 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:27.019 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:27.019 Available Spare: 0% 00:14:27.019 Available Spare Threshold: 0% 00:14:27.019 Life Percentage Used:[2024-12-06 13:53:26.218417] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.019 [2024-12-06 13:53:26.218424] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xd3f750) 00:14:27.019 [2024-12-06 13:53:26.218432] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.019 [2024-12-06 13:53:26.218456] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda41c0, cid 7, qid 0 00:14:27.019 [2024-12-06 13:53:26.218523] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.019 [2024-12-06 13:53:26.218530] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.019 [2024-12-06 13:53:26.218534] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.019 [2024-12-06 13:53:26.218538] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda41c0) on tqpair=0xd3f750 00:14:27.019 [2024-12-06 13:53:26.218576] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:14:27.019 [2024-12-06 13:53:26.218587] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda3740) on tqpair=0xd3f750 00:14:27.019 [2024-12-06 13:53:26.218594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:27.019 [2024-12-06 13:53:26.218600] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda38c0) on tqpair=0xd3f750 00:14:27.019 [2024-12-06 13:53:26.218605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:27.019 [2024-12-06 13:53:26.218610] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda3a40) on tqpair=0xd3f750 00:14:27.019 [2024-12-06 13:53:26.218615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:27.019 [2024-12-06 13:53:26.218620] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda3bc0) on tqpair=0xd3f750 00:14:27.019 [2024-12-06 13:53:26.218625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:27.019 [2024-12-06 13:53:26.218635] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
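The records here and just below show the teardown that follows the report: the driver logs "Prepare to destruct SSD", completes the outstanding admin requests as ABORTED - SQ DELETION, writes CC.SHN for a graceful shutdown (RTD3E = 0, shutdown timeout 10000 ms), and polls CSTS until it reports "shutdown complete in 6 milliseconds". The blocking spdk_nvme_detach() used in the sketch above performs the same steps internally; SPDK also exposes a non-blocking variant whose polling loop mirrors what the log shows. The snippet below is an illustrative sketch under that assumption, not code from the identify tool.

    /* Hedged sketch of a non-blocking controller teardown, mirroring the CC.SHN
     * write and CSTS polling in the shutdown records above. Assumes 'ctrlr' was
     * obtained with spdk_nvme_connect() as in the earlier sketch. */
    #include <errno.h>
    #include "spdk/nvme.h"

    static void
    detach_nonblocking(struct spdk_nvme_ctrlr *ctrlr)
    {
        struct spdk_nvme_detach_ctx *detach_ctx = NULL;

        /* Starts the shutdown; the driver writes CC.SHN and begins polling
         * CSTS.SHST, as traced in the log. */
        if (spdk_nvme_detach_async(ctrlr, &detach_ctx) != 0 || detach_ctx == NULL) {
            return;
        }

        /* -EAGAIN means the shutdown is still in progress; the log above shows
         * this completing after about 6 ms for nqn.2016-06.io.spdk:cnode1. */
        while (spdk_nvme_detach_poll_async(detach_ctx) == -EAGAIN) {
            /* A real application would do other work or sleep here. */
        }
    }
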
00:14:27.019 [2024-12-06 13:53:26.218658] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.019 [2024-12-06 13:53:26.218665] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd3f750) 00:14:27.019 [2024-12-06 13:53:26.218677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.019 [2024-12-06 13:53:26.218705] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda3bc0, cid 3, qid 0 00:14:27.019 [2024-12-06 13:53:26.218757] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.019 [2024-12-06 13:53:26.218764] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.019 [2024-12-06 13:53:26.218768] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.019 [2024-12-06 13:53:26.218772] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda3bc0) on tqpair=0xd3f750 00:14:27.019 [2024-12-06 13:53:26.218780] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.020 [2024-12-06 13:53:26.218784] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.020 [2024-12-06 13:53:26.218788] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd3f750) 00:14:27.020 [2024-12-06 13:53:26.218796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.020 [2024-12-06 13:53:26.218817] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda3bc0, cid 3, qid 0 00:14:27.020 [2024-12-06 13:53:26.218886] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.020 [2024-12-06 13:53:26.218893] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.020 [2024-12-06 13:53:26.218897] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.020 [2024-12-06 13:53:26.218901] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda3bc0) on tqpair=0xd3f750 00:14:27.020 [2024-12-06 13:53:26.218907] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:14:27.020 [2024-12-06 13:53:26.218912] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:14:27.020 [2024-12-06 13:53:26.218922] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.020 [2024-12-06 13:53:26.218927] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.020 [2024-12-06 13:53:26.218930] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd3f750) 00:14:27.020 [2024-12-06 13:53:26.218938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.020 [2024-12-06 13:53:26.218970] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda3bc0, cid 3, qid 0 00:14:27.020 [2024-12-06 13:53:26.219034] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.020 [2024-12-06 13:53:26.219041] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.020 [2024-12-06 13:53:26.219044] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.020 [2024-12-06 13:53:26.219049] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda3bc0) on tqpair=0xd3f750 00:14:27.020 
[2024-12-06 13:53:26.219059] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.020 [2024-12-06 13:53:26.219064] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.020 [2024-12-06 13:53:26.219067] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd3f750) 00:14:27.020 [2024-12-06 13:53:26.219075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.020 [2024-12-06 13:53:26.219091] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda3bc0, cid 3, qid 0 00:14:27.020 [2024-12-06 13:53:26.225264] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.020 [2024-12-06 13:53:26.225288] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.020 [2024-12-06 13:53:26.225309] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.020 [2024-12-06 13:53:26.225313] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda3bc0) on tqpair=0xd3f750 00:14:27.020 [2024-12-06 13:53:26.225326] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:27.020 [2024-12-06 13:53:26.225331] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:27.020 [2024-12-06 13:53:26.225335] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd3f750) 00:14:27.020 [2024-12-06 13:53:26.225344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:27.020 [2024-12-06 13:53:26.225369] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda3bc0, cid 3, qid 0 00:14:27.020 [2024-12-06 13:53:26.225423] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:27.020 [2024-12-06 13:53:26.225429] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:27.020 [2024-12-06 13:53:26.225433] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:27.020 [2024-12-06 13:53:26.225437] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda3bc0) on tqpair=0xd3f750 00:14:27.020 [2024-12-06 13:53:26.225445] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:14:27.020 0% 00:14:27.020 Data Units Read: 0 00:14:27.020 Data Units Written: 0 00:14:27.020 Host Read Commands: 0 00:14:27.020 Host Write Commands: 0 00:14:27.020 Controller Busy Time: 0 minutes 00:14:27.020 Power Cycles: 0 00:14:27.020 Power On Hours: 0 hours 00:14:27.020 Unsafe Shutdowns: 0 00:14:27.020 Unrecoverable Media Errors: 0 00:14:27.020 Lifetime Error Log Entries: 0 00:14:27.020 Warning Temperature Time: 0 minutes 00:14:27.020 Critical Temperature Time: 0 minutes 00:14:27.020 00:14:27.020 Number of Queues 00:14:27.020 ================ 00:14:27.020 Number of I/O Submission Queues: 127 00:14:27.020 Number of I/O Completion Queues: 127 00:14:27.020 00:14:27.020 Active Namespaces 00:14:27.020 ================= 00:14:27.020 Namespace ID:1 00:14:27.020 Error Recovery Timeout: Unlimited 00:14:27.020 Command Set Identifier: NVM (00h) 00:14:27.020 Deallocate: Supported 00:14:27.020 Deallocated/Unwritten Error: Not Supported 00:14:27.020 Deallocated Read Value: Unknown 00:14:27.020 Deallocate in Write Zeroes: Not Supported 00:14:27.020 Deallocated Guard Field: 0xFFFF 00:14:27.020 Flush: Supported 00:14:27.020 Reservation: Supported 00:14:27.020 
Namespace Sharing Capabilities: Multiple Controllers 00:14:27.020 Size (in LBAs): 131072 (0GiB) 00:14:27.020 Capacity (in LBAs): 131072 (0GiB) 00:14:27.020 Utilization (in LBAs): 131072 (0GiB) 00:14:27.020 NGUID: ABCDEF0123456789ABCDEF0123456789 00:14:27.020 EUI64: ABCDEF0123456789 00:14:27.020 UUID: c00c470d-d70f-4d42-9c9d-db6d7a9cdd93 00:14:27.020 Thin Provisioning: Not Supported 00:14:27.020 Per-NS Atomic Units: Yes 00:14:27.020 Atomic Boundary Size (Normal): 0 00:14:27.020 Atomic Boundary Size (PFail): 0 00:14:27.020 Atomic Boundary Offset: 0 00:14:27.020 Maximum Single Source Range Length: 65535 00:14:27.020 Maximum Copy Length: 65535 00:14:27.020 Maximum Source Range Count: 1 00:14:27.020 NGUID/EUI64 Never Reused: No 00:14:27.020 Namespace Write Protected: No 00:14:27.020 Number of LBA Formats: 1 00:14:27.020 Current LBA Format: LBA Format #00 00:14:27.020 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:27.020 00:14:27.020 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:14:27.020 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:27.020 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.020 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:27.020 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.020 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:14:27.020 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:14:27.020 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:27.020 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:14:27.020 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:27.020 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:14:27.020 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:27.020 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:27.020 rmmod nvme_tcp 00:14:27.020 rmmod nvme_fabrics 00:14:27.020 rmmod nvme_keyring 00:14:27.020 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:27.020 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:14:27.020 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:14:27.020 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 74030 ']' 00:14:27.020 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 74030 00:14:27.020 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 74030 ']' 00:14:27.020 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 74030 00:14:27.020 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:14:27.020 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:27.020 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74030 00:14:27.020 killing process with pid 74030 00:14:27.020 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:14:27.020 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:27.020 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74030' 00:14:27.020 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 74030 00:14:27.020 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 74030 00:14:27.278 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:27.278 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:27.278 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:27.278 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:14:27.278 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:14:27.278 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:14:27.278 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:27.278 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:27.278 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:27.278 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:27.279 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:27.279 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:27.279 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:27.537 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:27.537 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:27.537 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:27.537 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:27.537 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:27.537 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:27.537 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:27.537 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:27.537 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:27.537 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:27.537 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.537 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:27.537 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.537 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:14:27.537 00:14:27.537 real 0m2.292s 00:14:27.537 user 0m4.629s 00:14:27.537 sys 0m0.767s 
00:14:27.537 ************************************ 00:14:27.537 END TEST nvmf_identify 00:14:27.537 ************************************ 00:14:27.537 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:27.537 13:53:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:27.537 13:53:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:27.537 13:53:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:27.537 13:53:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:27.537 13:53:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:27.537 ************************************ 00:14:27.537 START TEST nvmf_perf 00:14:27.537 ************************************ 00:14:27.537 13:53:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:27.796 * Looking for test storage... 00:14:27.796 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:27.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.796 --rc genhtml_branch_coverage=1 00:14:27.796 --rc genhtml_function_coverage=1 00:14:27.796 --rc genhtml_legend=1 00:14:27.796 --rc geninfo_all_blocks=1 00:14:27.796 --rc geninfo_unexecuted_blocks=1 00:14:27.796 00:14:27.796 ' 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:27.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.796 --rc genhtml_branch_coverage=1 00:14:27.796 --rc genhtml_function_coverage=1 00:14:27.796 --rc genhtml_legend=1 00:14:27.796 --rc geninfo_all_blocks=1 00:14:27.796 --rc geninfo_unexecuted_blocks=1 00:14:27.796 00:14:27.796 ' 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:27.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.796 --rc genhtml_branch_coverage=1 00:14:27.796 --rc genhtml_function_coverage=1 00:14:27.796 --rc genhtml_legend=1 00:14:27.796 --rc geninfo_all_blocks=1 00:14:27.796 --rc geninfo_unexecuted_blocks=1 00:14:27.796 00:14:27.796 ' 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:27.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.796 --rc genhtml_branch_coverage=1 00:14:27.796 --rc genhtml_function_coverage=1 00:14:27.796 --rc genhtml_legend=1 00:14:27.796 --rc geninfo_all_blocks=1 00:14:27.796 --rc geninfo_unexecuted_blocks=1 00:14:27.796 00:14:27.796 ' 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:27.796 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=cfa2def7-c8af-457f-82a0-b312efdea7f4 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:27.797 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:27.797 Cannot find device "nvmf_init_br" 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:27.797 Cannot find device "nvmf_init_br2" 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:27.797 Cannot find device "nvmf_tgt_br" 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:27.797 Cannot find device "nvmf_tgt_br2" 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:27.797 Cannot find device "nvmf_init_br" 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:27.797 Cannot find device "nvmf_init_br2" 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:14:27.797 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:28.056 Cannot find device "nvmf_tgt_br" 00:14:28.056 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:14:28.057 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:28.057 Cannot find device "nvmf_tgt_br2" 00:14:28.057 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:14:28.057 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:28.057 Cannot find device "nvmf_br" 00:14:28.057 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:14:28.057 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:28.057 Cannot find device "nvmf_init_if" 00:14:28.057 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:14:28.057 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:28.057 Cannot find device "nvmf_init_if2" 00:14:28.057 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:14:28.057 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:28.057 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:28.057 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:14:28.057 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:28.057 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:28.057 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:14:28.057 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:28.057 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:28.057 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:28.057 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:28.057 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:28.057 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:28.057 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:28.057 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:28.057 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:28.057 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:28.057 13:53:27 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:28.057 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:28.057 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:28.057 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:28.057 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:28.057 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:28.057 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:28.057 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:28.057 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:28.057 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:28.057 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:28.057 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:28.057 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:28.057 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:28.315 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:28.315 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:28.315 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:28.315 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:28.315 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:28.315 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:28.315 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:28.315 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:28.315 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:28.315 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:28.315 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.104 ms 00:14:28.315 00:14:28.315 --- 10.0.0.3 ping statistics --- 00:14:28.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.315 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:14:28.315 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:28.315 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:14:28.315 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:14:28.315 00:14:28.320 --- 10.0.0.4 ping statistics --- 00:14:28.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.320 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:14:28.320 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:28.320 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:28.320 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:14:28.320 00:14:28.320 --- 10.0.0.1 ping statistics --- 00:14:28.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.320 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:14:28.320 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:28.320 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:28.320 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:14:28.320 00:14:28.320 --- 10.0.0.2 ping statistics --- 00:14:28.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.320 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:14:28.320 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:28.320 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:14:28.320 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:28.320 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:28.320 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:28.320 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:28.320 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:28.320 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:28.320 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:28.320 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:14:28.320 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:28.320 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:28.320 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:28.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.320 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=74289 00:14:28.320 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:28.320 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 74289 00:14:28.320 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 74289 ']' 00:14:28.320 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.320 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:28.320 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:28.320 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:28.320 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:28.320 [2024-12-06 13:53:27.606255] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:14:28.320 [2024-12-06 13:53:27.606500] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:28.580 [2024-12-06 13:53:27.753185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:28.580 [2024-12-06 13:53:27.795636] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:28.580 [2024-12-06 13:53:27.795948] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:28.580 [2024-12-06 13:53:27.796184] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:28.580 [2024-12-06 13:53:27.796284] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:28.580 [2024-12-06 13:53:27.796338] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:28.580 [2024-12-06 13:53:27.797603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:28.580 [2024-12-06 13:53:27.797711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:28.580 [2024-12-06 13:53:27.797782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.580 [2024-12-06 13:53:27.797784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:28.580 [2024-12-06 13:53:27.849424] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:28.580 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:28.580 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:14:28.580 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:28.580 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:28.580 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:28.580 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:28.580 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:14:28.580 13:53:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:29.144 13:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:14:29.144 13:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:14:29.402 13:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:14:29.402 13:53:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:29.660 13:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:14:29.660 13:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:14:29.660 13:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:14:29.660 13:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:14:29.660 13:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:29.918 [2024-12-06 13:53:29.293062] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:30.177 13:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:30.177 13:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:30.177 13:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:30.435 13:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:30.435 13:53:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:14:30.694 13:53:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:30.953 [2024-12-06 13:53:30.312416] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:30.953 13:53:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:14:31.212 13:53:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:14:31.212 13:53:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:31.212 13:53:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:14:31.212 13:53:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:32.620 Initializing NVMe Controllers 00:14:32.620 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:32.620 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:14:32.620 Initialization complete. Launching workers. 00:14:32.620 ======================================================== 00:14:32.621 Latency(us) 00:14:32.621 Device Information : IOPS MiB/s Average min max 00:14:32.621 PCIE (0000:00:10.0) NSID 1 from core 0: 24056.40 93.97 1329.69 326.79 8974.59 00:14:32.621 ======================================================== 00:14:32.621 Total : 24056.40 93.97 1329.69 326.79 8974.59 00:14:32.621 00:14:32.621 13:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:33.559 Initializing NVMe Controllers 00:14:33.559 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:33.559 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:33.559 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:33.559 Initialization complete. Launching workers. 
00:14:33.559 ======================================================== 00:14:33.559 Latency(us) 00:14:33.559 Device Information : IOPS MiB/s Average min max 00:14:33.559 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3868.96 15.11 258.18 98.21 7193.76 00:14:33.559 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 121.00 0.47 8313.20 5058.23 14959.93 00:14:33.559 ======================================================== 00:14:33.559 Total : 3989.96 15.59 502.45 98.21 14959.93 00:14:33.559 00:14:33.818 13:53:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:35.195 Initializing NVMe Controllers 00:14:35.195 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:35.195 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:35.195 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:35.195 Initialization complete. Launching workers. 00:14:35.195 ======================================================== 00:14:35.195 Latency(us) 00:14:35.195 Device Information : IOPS MiB/s Average min max 00:14:35.195 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8075.22 31.54 3968.76 663.50 10847.44 00:14:35.195 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3218.69 12.57 10008.66 6351.23 16624.65 00:14:35.195 ======================================================== 00:14:35.195 Total : 11293.92 44.12 5690.09 663.50 16624.65 00:14:35.195 00:14:35.195 13:53:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:14:35.195 13:53:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:37.727 Initializing NVMe Controllers 00:14:37.727 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:37.727 Controller IO queue size 128, less than required. 00:14:37.727 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:37.727 Controller IO queue size 128, less than required. 00:14:37.727 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:37.727 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:37.728 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:37.728 Initialization complete. Launching workers. 
00:14:37.728 ======================================================== 00:14:37.728 Latency(us) 00:14:37.728 Device Information : IOPS MiB/s Average min max 00:14:37.728 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1754.56 438.64 73848.79 44650.46 113203.09 00:14:37.728 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 620.11 155.03 211349.09 92373.97 318949.34 00:14:37.728 ======================================================== 00:14:37.728 Total : 2374.67 593.67 109754.86 44650.46 318949.34 00:14:37.728 00:14:37.728 13:53:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:14:37.986 Initializing NVMe Controllers 00:14:37.986 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:37.986 Controller IO queue size 128, less than required. 00:14:37.986 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:37.986 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:14:37.986 Controller IO queue size 128, less than required. 00:14:37.986 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:37.986 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:14:37.986 WARNING: Some requested NVMe devices were skipped 00:14:37.986 No valid NVMe controllers or AIO or URING devices found 00:14:37.986 13:53:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:14:40.521 Initializing NVMe Controllers 00:14:40.521 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:40.521 Controller IO queue size 128, less than required. 00:14:40.521 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:40.521 Controller IO queue size 128, less than required. 00:14:40.521 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:40.521 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:40.521 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:40.521 Initialization complete. Launching workers. 
00:14:40.521 00:14:40.521 ==================== 00:14:40.521 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:14:40.521 TCP transport: 00:14:40.521 polls: 8982 00:14:40.521 idle_polls: 5509 00:14:40.521 sock_completions: 3473 00:14:40.521 nvme_completions: 5569 00:14:40.521 submitted_requests: 8328 00:14:40.521 queued_requests: 1 00:14:40.521 00:14:40.521 ==================== 00:14:40.521 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:14:40.521 TCP transport: 00:14:40.521 polls: 12570 00:14:40.521 idle_polls: 8610 00:14:40.521 sock_completions: 3960 00:14:40.521 nvme_completions: 5925 00:14:40.521 submitted_requests: 8802 00:14:40.521 queued_requests: 1 00:14:40.521 ======================================================== 00:14:40.521 Latency(us) 00:14:40.521 Device Information : IOPS MiB/s Average min max 00:14:40.521 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1391.68 347.92 93296.20 53356.25 152100.72 00:14:40.521 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1480.66 370.17 87901.72 43117.27 134050.87 00:14:40.521 ======================================================== 00:14:40.521 Total : 2872.34 718.09 90515.40 43117.27 152100.72 00:14:40.521 00:14:40.521 13:53:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:14:40.521 13:53:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:40.780 13:53:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:14:40.780 13:53:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:40.780 13:53:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:14:40.780 13:53:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:40.780 13:53:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:14:40.780 13:53:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:40.780 13:53:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:14:40.780 13:53:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:40.780 13:53:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:40.780 rmmod nvme_tcp 00:14:40.780 rmmod nvme_fabrics 00:14:41.039 rmmod nvme_keyring 00:14:41.039 13:53:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:41.039 13:53:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:14:41.039 13:53:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:14:41.039 13:53:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 74289 ']' 00:14:41.039 13:53:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 74289 00:14:41.039 13:53:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 74289 ']' 00:14:41.039 13:53:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 74289 00:14:41.039 13:53:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:14:41.039 13:53:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:41.039 13:53:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74289 00:14:41.039 killing process with pid 74289 00:14:41.039 13:53:40 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:41.039 13:53:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:41.039 13:53:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74289' 00:14:41.039 13:53:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 74289 00:14:41.039 13:53:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 74289 00:14:41.607 13:53:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:41.607 13:53:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:41.607 13:53:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:41.607 13:53:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:14:41.607 13:53:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:14:41.607 13:53:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:41.607 13:53:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:14:41.607 13:53:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:41.607 13:53:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:41.607 13:53:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:41.607 13:53:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:41.607 13:53:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:41.607 13:53:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:41.607 13:53:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:41.866 13:53:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:41.866 13:53:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:41.866 13:53:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:41.866 13:53:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:41.866 13:53:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:41.866 13:53:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:41.866 13:53:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:41.866 13:53:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:41.866 13:53:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:41.866 13:53:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.866 13:53:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:41.866 13:53:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.866 13:53:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:14:41.866 ************************************ 00:14:41.866 END TEST nvmf_perf 00:14:41.866 ************************************ 
00:14:41.866 00:14:41.866 real 0m14.257s 00:14:41.866 user 0m51.136s 00:14:41.866 sys 0m4.386s 00:14:41.866 13:53:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:41.866 13:53:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:41.866 13:53:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:41.866 13:53:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:41.866 13:53:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:41.866 13:53:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:41.866 ************************************ 00:14:41.866 START TEST nvmf_fio_host 00:14:41.866 ************************************ 00:14:41.866 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:42.126 * Looking for test storage... 00:14:42.126 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:42.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.126 --rc genhtml_branch_coverage=1 00:14:42.126 --rc genhtml_function_coverage=1 00:14:42.126 --rc genhtml_legend=1 00:14:42.126 --rc geninfo_all_blocks=1 00:14:42.126 --rc geninfo_unexecuted_blocks=1 00:14:42.126 00:14:42.126 ' 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:42.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.126 --rc genhtml_branch_coverage=1 00:14:42.126 --rc genhtml_function_coverage=1 00:14:42.126 --rc genhtml_legend=1 00:14:42.126 --rc geninfo_all_blocks=1 00:14:42.126 --rc geninfo_unexecuted_blocks=1 00:14:42.126 00:14:42.126 ' 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:42.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.126 --rc genhtml_branch_coverage=1 00:14:42.126 --rc genhtml_function_coverage=1 00:14:42.126 --rc genhtml_legend=1 00:14:42.126 --rc geninfo_all_blocks=1 00:14:42.126 --rc geninfo_unexecuted_blocks=1 00:14:42.126 00:14:42.126 ' 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:42.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.126 --rc genhtml_branch_coverage=1 00:14:42.126 --rc genhtml_function_coverage=1 00:14:42.126 --rc genhtml_legend=1 00:14:42.126 --rc geninfo_all_blocks=1 00:14:42.126 --rc geninfo_unexecuted_blocks=1 00:14:42.126 00:14:42.126 ' 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:42.126 13:53:41 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cfa2def7-c8af-457f-82a0-b312efdea7f4 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:42.126 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.127 13:53:41 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:42.127 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:42.127 Cannot find device "nvmf_init_br" 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:42.127 Cannot find device "nvmf_init_br2" 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:42.127 Cannot find device "nvmf_tgt_br" 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:14:42.127 Cannot find device "nvmf_tgt_br2" 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:14:42.127 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:42.387 Cannot find device "nvmf_init_br" 00:14:42.387 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:14:42.387 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:42.387 Cannot find device "nvmf_init_br2" 00:14:42.387 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:14:42.387 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:42.387 Cannot find device "nvmf_tgt_br" 00:14:42.387 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:14:42.387 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:42.387 Cannot find device "nvmf_tgt_br2" 00:14:42.387 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:14:42.387 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:42.387 Cannot find device "nvmf_br" 00:14:42.387 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:14:42.387 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:42.387 Cannot find device "nvmf_init_if" 00:14:42.387 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:14:42.387 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:42.387 Cannot find device "nvmf_init_if2" 00:14:42.387 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:14:42.387 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:42.387 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:42.387 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:14:42.387 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:42.387 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:42.387 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:14:42.387 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:42.387 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:42.387 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:42.387 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:42.387 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:42.387 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:42.387 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:42.387 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:14:42.387 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:42.387 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:42.387 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:42.387 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:42.387 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:42.387 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:42.387 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:42.387 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:42.388 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:42.388 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:42.388 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:42.388 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:42.388 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:42.388 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:42.388 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:42.388 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:42.388 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:42.647 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:42.647 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:42.647 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:42.647 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:42.647 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:42.647 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:42.647 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:42.647 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:42.647 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:42.647 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:14:42.647 00:14:42.647 --- 10.0.0.3 ping statistics --- 00:14:42.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.647 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:14:42.647 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:42.647 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:42.647 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:14:42.647 00:14:42.647 --- 10.0.0.4 ping statistics --- 00:14:42.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.647 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:14:42.647 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:42.647 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:42.647 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:14:42.647 00:14:42.647 --- 10.0.0.1 ping statistics --- 00:14:42.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.647 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:14:42.647 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:42.647 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:42.647 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:14:42.647 00:14:42.647 --- 10.0.0.2 ping statistics --- 00:14:42.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.647 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:14:42.647 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:42.647 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:14:42.647 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:42.647 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:42.647 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:42.647 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:42.647 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:42.647 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:42.647 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:42.647 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:14:42.648 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:14:42.648 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:42.648 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:42.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:42.648 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=74755 00:14:42.648 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:42.648 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:42.648 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 74755 00:14:42.648 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 74755 ']' 00:14:42.648 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.648 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:42.648 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.648 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:42.648 13:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:42.648 [2024-12-06 13:53:41.927278] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:14:42.648 [2024-12-06 13:53:41.927541] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:42.927 [2024-12-06 13:53:42.082380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:42.927 [2024-12-06 13:53:42.140023] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:42.927 [2024-12-06 13:53:42.140381] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:42.927 [2024-12-06 13:53:42.140529] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:42.927 [2024-12-06 13:53:42.140545] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:42.927 [2024-12-06 13:53:42.140554] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:42.927 [2024-12-06 13:53:42.141778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:42.927 [2024-12-06 13:53:42.141906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:42.927 [2024-12-06 13:53:42.142056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:42.927 [2024-12-06 13:53:42.142191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.927 [2024-12-06 13:53:42.201018] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:43.863 13:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:43.863 13:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:14:43.863 13:53:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:43.863 [2024-12-06 13:53:43.151010] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:43.863 13:53:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:14:43.863 13:53:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:43.863 13:53:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:43.863 13:53:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:44.121 Malloc1 00:14:44.379 13:53:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:44.638 13:53:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:44.638 13:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:44.921 [2024-12-06 13:53:44.219135] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:44.921 13:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:14:45.179 13:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:14:45.179 13:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:14:45.179 13:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:14:45.179 13:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:45.179 13:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:45.179 13:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:45.179 13:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:45.179 13:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:14:45.179 13:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:45.179 13:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:45.179 13:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:45.179 13:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:45.179 13:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:14:45.179 13:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:14:45.179 13:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:14:45.179 13:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:45.179 13:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:45.179 13:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:45.179 13:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:14:45.179 13:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:14:45.179 13:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:14:45.179 13:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:45.179 13:53:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:14:45.437 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:45.437 fio-3.35 00:14:45.438 Starting 1 thread 00:14:47.985 00:14:47.985 test: (groupid=0, jobs=1): err= 0: pid=74838: Fri Dec 6 13:53:46 2024 00:14:47.985 read: IOPS=8369, BW=32.7MiB/s (34.3MB/s)(65.6MiB/2006msec) 00:14:47.985 slat (nsec): min=1729, max=314327, avg=2203.12, stdev=3236.95 00:14:47.985 clat (usec): min=2376, max=19115, avg=7986.40, stdev=1536.55 00:14:47.985 lat (usec): min=2413, max=19117, avg=7988.60, stdev=1536.41 00:14:47.985 clat percentiles (usec): 00:14:47.985 | 1.00th=[ 6063], 5.00th=[ 6456], 10.00th=[ 6652], 20.00th=[ 6915], 00:14:47.985 | 30.00th=[ 7111], 40.00th=[ 7373], 50.00th=[ 7570], 60.00th=[ 7832], 00:14:47.985 | 70.00th=[ 8094], 80.00th=[ 8717], 90.00th=[10159], 95.00th=[11207], 00:14:47.985 | 99.00th=[12780], 99.50th=[15401], 99.90th=[18220], 99.95th=[18744], 00:14:47.985 | 99.99th=[19006] 00:14:47.985 bw ( KiB/s): min=30536, max=36640, per=99.84%, avg=33424.00, stdev=2501.46, samples=4 00:14:47.985 iops : min= 7634, max= 9160, avg=8356.00, stdev=625.36, samples=4 00:14:47.985 write: IOPS=8364, BW=32.7MiB/s (34.3MB/s)(65.5MiB/2006msec); 0 zone resets 00:14:47.985 slat (nsec): min=1812, max=214860, avg=2321.72, stdev=2245.40 00:14:47.985 clat (usec): min=2245, max=18249, avg=7237.74, stdev=1352.41 00:14:47.985 lat (usec): min=2257, max=18251, avg=7240.06, stdev=1352.35 00:14:47.985 
clat percentiles (usec): 00:14:47.985 | 1.00th=[ 5604], 5.00th=[ 5932], 10.00th=[ 6063], 20.00th=[ 6325], 00:14:47.985 | 30.00th=[ 6456], 40.00th=[ 6652], 50.00th=[ 6849], 60.00th=[ 7046], 00:14:47.985 | 70.00th=[ 7373], 80.00th=[ 7898], 90.00th=[ 9110], 95.00th=[10028], 00:14:47.985 | 99.00th=[11600], 99.50th=[13042], 99.90th=[16188], 99.95th=[17171], 00:14:47.985 | 99.99th=[18220] 00:14:47.985 bw ( KiB/s): min=30592, max=37512, per=100.00%, avg=33458.00, stdev=2921.32, samples=4 00:14:47.985 iops : min= 7648, max= 9378, avg=8364.50, stdev=730.33, samples=4 00:14:47.985 lat (msec) : 4=0.12%, 10=91.50%, 20=8.38% 00:14:47.985 cpu : usr=72.72%, sys=21.65%, ctx=5, majf=0, minf=7 00:14:47.985 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:14:47.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:47.986 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:47.986 issued rwts: total=16789,16779,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:47.986 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:47.986 00:14:47.986 Run status group 0 (all jobs): 00:14:47.986 READ: bw=32.7MiB/s (34.3MB/s), 32.7MiB/s-32.7MiB/s (34.3MB/s-34.3MB/s), io=65.6MiB (68.8MB), run=2006-2006msec 00:14:47.986 WRITE: bw=32.7MiB/s (34.3MB/s), 32.7MiB/s-32.7MiB/s (34.3MB/s-34.3MB/s), io=65.5MiB (68.7MB), run=2006-2006msec 00:14:47.986 13:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:14:47.986 13:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:14:47.986 13:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:47.986 13:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:47.986 13:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:47.986 13:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:47.986 13:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:14:47.986 13:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:47.986 13:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:47.986 13:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:47.986 13:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:14:47.986 13:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:47.986 13:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:14:47.986 13:53:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:14:47.986 13:53:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:47.986 13:53:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:47.986 13:53:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:14:47.986 13:53:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:47.986 13:53:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:14:47.986 13:53:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:14:47.986 13:53:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:47.986 13:53:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:14:47.986 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:14:47.986 fio-3.35 00:14:47.986 Starting 1 thread 00:14:50.519 00:14:50.519 test: (groupid=0, jobs=1): err= 0: pid=74887: Fri Dec 6 13:53:49 2024 00:14:50.519 read: IOPS=8603, BW=134MiB/s (141MB/s)(270MiB/2009msec) 00:14:50.519 slat (usec): min=2, max=149, avg= 3.53, stdev= 2.45 00:14:50.519 clat (usec): min=2002, max=17351, avg=8261.31, stdev=2539.27 00:14:50.519 lat (usec): min=2006, max=17354, avg=8264.84, stdev=2539.35 00:14:50.519 clat percentiles (usec): 00:14:50.519 | 1.00th=[ 4015], 5.00th=[ 4621], 10.00th=[ 5211], 20.00th=[ 5932], 00:14:50.519 | 30.00th=[ 6718], 40.00th=[ 7373], 50.00th=[ 8029], 60.00th=[ 8717], 00:14:50.519 | 70.00th=[ 9372], 80.00th=[10159], 90.00th=[11600], 95.00th=[13042], 00:14:50.519 | 99.00th=[15664], 99.50th=[16319], 99.90th=[16909], 99.95th=[17171], 00:14:50.519 | 99.99th=[17171] 00:14:50.519 bw ( KiB/s): min=63904, max=80800, per=51.90%, avg=71448.00, stdev=8569.88, samples=4 00:14:50.519 iops : min= 3994, max= 5050, avg=4465.50, stdev=535.62, samples=4 00:14:50.519 write: IOPS=5120, BW=80.0MiB/s (83.9MB/s)(146MiB/1825msec); 0 zone resets 00:14:50.519 slat (usec): min=31, max=322, avg=35.97, stdev= 8.64 00:14:50.519 clat (usec): min=4046, max=20785, avg=11489.21, stdev=2136.17 00:14:50.519 lat (usec): min=4094, max=20822, avg=11525.18, stdev=2136.41 00:14:50.519 clat percentiles (usec): 00:14:50.519 | 1.00th=[ 7570], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[ 9634], 00:14:50.519 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11338], 60.00th=[11863], 00:14:50.519 | 70.00th=[12518], 80.00th=[13304], 90.00th=[14484], 95.00th=[15139], 00:14:50.519 | 99.00th=[16909], 99.50th=[17695], 99.90th=[19530], 99.95th=[20317], 00:14:50.519 | 99.99th=[20841] 00:14:50.519 bw ( KiB/s): min=66816, max=83008, per=90.39%, avg=74048.00, stdev=8342.45, samples=4 00:14:50.519 iops : min= 4176, max= 5188, avg=4628.00, stdev=521.40, samples=4 00:14:50.519 lat (msec) : 4=0.63%, 10=59.59%, 20=39.75%, 50=0.03% 00:14:50.519 cpu : usr=84.41%, sys=11.90%, ctx=3, majf=0, minf=14 00:14:50.519 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:14:50.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:50.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:50.519 issued rwts: total=17285,9344,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:50.519 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:50.519 00:14:50.519 Run status group 0 (all jobs): 00:14:50.519 
READ: bw=134MiB/s (141MB/s), 134MiB/s-134MiB/s (141MB/s-141MB/s), io=270MiB (283MB), run=2009-2009msec 00:14:50.519 WRITE: bw=80.0MiB/s (83.9MB/s), 80.0MiB/s-80.0MiB/s (83.9MB/s-83.9MB/s), io=146MiB (153MB), run=1825-1825msec 00:14:50.519 13:53:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:50.519 13:53:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:14:50.519 13:53:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:14:50.519 13:53:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:14:50.519 13:53:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:14:50.519 13:53:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:50.519 13:53:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:14:50.519 13:53:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:50.519 13:53:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:14:50.519 13:53:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:50.519 13:53:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:50.519 rmmod nvme_tcp 00:14:50.519 rmmod nvme_fabrics 00:14:50.519 rmmod nvme_keyring 00:14:50.519 13:53:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:50.519 13:53:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:14:50.519 13:53:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:14:50.519 13:53:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 74755 ']' 00:14:50.519 13:53:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 74755 00:14:50.519 13:53:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 74755 ']' 00:14:50.519 13:53:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 74755 00:14:50.519 13:53:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:14:50.519 13:53:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:50.519 13:53:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74755 00:14:50.519 killing process with pid 74755 00:14:50.519 13:53:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:50.519 13:53:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:50.519 13:53:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74755' 00:14:50.519 13:53:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 74755 00:14:50.519 13:53:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 74755 00:14:50.779 13:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:50.779 13:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:50.779 13:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:50.779 13:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:14:50.779 13:53:50 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:14:50.779 13:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:50.779 13:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:14:50.779 13:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:50.779 13:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:50.779 13:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:50.779 13:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:50.779 13:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:50.779 13:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:51.038 13:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:51.038 13:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:51.038 13:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:51.038 13:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:51.038 13:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:51.038 13:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:51.038 13:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:51.038 13:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:51.038 13:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:51.038 13:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:51.038 13:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.038 13:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:51.038 13:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.038 13:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:14:51.038 ************************************ 00:14:51.038 END TEST nvmf_fio_host 00:14:51.038 ************************************ 00:14:51.038 00:14:51.038 real 0m9.133s 00:14:51.038 user 0m36.254s 00:14:51.038 sys 0m2.438s 00:14:51.038 13:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:51.038 13:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:51.038 13:53:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:51.038 13:53:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:51.038 13:53:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:51.038 13:53:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:51.038 ************************************ 00:14:51.038 START TEST nvmf_failover 
00:14:51.039 ************************************ 00:14:51.039 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:51.299 * Looking for test storage... 00:14:51.299 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:51.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.299 --rc genhtml_branch_coverage=1 00:14:51.299 --rc genhtml_function_coverage=1 00:14:51.299 --rc genhtml_legend=1 00:14:51.299 --rc geninfo_all_blocks=1 00:14:51.299 --rc geninfo_unexecuted_blocks=1 00:14:51.299 00:14:51.299 ' 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:51.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.299 --rc genhtml_branch_coverage=1 00:14:51.299 --rc genhtml_function_coverage=1 00:14:51.299 --rc genhtml_legend=1 00:14:51.299 --rc geninfo_all_blocks=1 00:14:51.299 --rc geninfo_unexecuted_blocks=1 00:14:51.299 00:14:51.299 ' 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:51.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.299 --rc genhtml_branch_coverage=1 00:14:51.299 --rc genhtml_function_coverage=1 00:14:51.299 --rc genhtml_legend=1 00:14:51.299 --rc geninfo_all_blocks=1 00:14:51.299 --rc geninfo_unexecuted_blocks=1 00:14:51.299 00:14:51.299 ' 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:51.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.299 --rc genhtml_branch_coverage=1 00:14:51.299 --rc genhtml_function_coverage=1 00:14:51.299 --rc genhtml_legend=1 00:14:51.299 --rc geninfo_all_blocks=1 00:14:51.299 --rc geninfo_unexecuted_blocks=1 00:14:51.299 00:14:51.299 ' 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=cfa2def7-c8af-457f-82a0-b312efdea7f4 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:14:51.299 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.300 
13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:51.300 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:51.300 Cannot find device "nvmf_init_br" 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:51.300 Cannot find device "nvmf_init_br2" 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:14:51.300 Cannot find device "nvmf_tgt_br" 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:51.300 Cannot find device "nvmf_tgt_br2" 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:51.300 Cannot find device "nvmf_init_br" 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:14:51.300 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:51.559 Cannot find device "nvmf_init_br2" 00:14:51.559 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:14:51.559 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:51.559 Cannot find device "nvmf_tgt_br" 00:14:51.559 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:14:51.559 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:51.559 Cannot find device "nvmf_tgt_br2" 00:14:51.559 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:14:51.559 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:51.559 Cannot find device "nvmf_br" 00:14:51.559 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:14:51.559 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:51.560 Cannot find device "nvmf_init_if" 00:14:51.560 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:14:51.560 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:51.560 Cannot find device "nvmf_init_if2" 00:14:51.560 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:14:51.560 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:51.560 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:51.560 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:14:51.560 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:51.560 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:51.560 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:14:51.560 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:51.560 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:51.560 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:51.560 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:51.560 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:51.560 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:51.560 
13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:51.560 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:51.560 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:51.560 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:51.560 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:51.560 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:51.560 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:51.560 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:51.560 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:51.560 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:51.560 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:51.560 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:51.560 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:51.560 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:51.560 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:51.560 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:51.560 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:51.560 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:51.560 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:51.819 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:51.819 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:51.819 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:51.819 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:51.819 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:51.819 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:51.819 13:53:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:14:51.819 13:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:51.819 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:51.819 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:14:51.819 00:14:51.819 --- 10.0.0.3 ping statistics --- 00:14:51.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.819 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:14:51.819 13:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:51.819 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:51.819 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:14:51.819 00:14:51.819 --- 10.0.0.4 ping statistics --- 00:14:51.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.819 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:14:51.819 13:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:51.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:51.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:14:51.819 00:14:51.819 --- 10.0.0.1 ping statistics --- 00:14:51.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.819 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:14:51.819 13:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:51.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:51.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:14:51.819 00:14:51.819 --- 10.0.0.2 ping statistics --- 00:14:51.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.819 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:14:51.819 13:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:51.819 13:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:14:51.819 13:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:51.819 13:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:51.819 13:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:51.819 13:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:51.819 13:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:51.819 13:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:51.819 13:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:51.819 13:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:14:51.819 13:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:51.819 13:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:51.819 13:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:51.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
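The nvmf_veth_init trace above is the whole test topology: veth pairs for the initiator and target sides, the target ends moved into the nvmf_tgt_ns_spdk namespace, everything joined over the nvmf_br bridge, iptables ACCEPT rules for the NVMe/TCP port, and a ping sweep to prove the paths work. A condensed sketch of that setup, using the same interface names and 10.0.0.0/24 addresses as this run and showing only one veth pair per side for brevity, looks roughly like this:

# target namespace plus one veth pair per side
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target end lives in the namespace

# addresses as used in this run: initiator 10.0.0.1, target 10.0.0.3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

# bring everything up and bridge the host-side peers together
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge; ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# let NVMe/TCP traffic in and across the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# same sanity check the harness performs
ping -c 1 10.0.0.3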
00:14:51.820 13:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=75156 00:14:51.820 13:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 75156 00:14:51.820 13:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75156 ']' 00:14:51.820 13:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.820 13:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:51.820 13:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:51.820 13:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.820 13:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:51.820 13:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:51.820 [2024-12-06 13:53:51.104130] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:14:51.820 [2024-12-06 13:53:51.104244] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:52.079 [2024-12-06 13:53:51.254318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:52.079 [2024-12-06 13:53:51.314381] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:52.079 [2024-12-06 13:53:51.314836] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:52.079 [2024-12-06 13:53:51.315167] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:52.079 [2024-12-06 13:53:51.315413] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:52.079 [2024-12-06 13:53:51.315696] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
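With the namespace in place, the target application itself is launched inside it; -m 0xE puts the reactors on cores 1-3 (matching the three reactor_run notices above) and -e 0xFFFF enables every tracepoint group. The harness then blocks in waitforlisten until the RPC socket answers. A minimal stand-in for that start-and-wait step, with the readiness poll simplified to an rpc_get_methods loop rather than what waitforlisten actually does, might be:

NVMF_TGT=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# run the target inside the test namespace, reactors on cores 1-3
ip netns exec nvmf_tgt_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

# wait until the UNIX-domain RPC socket at /var/tmp/spdk.sock accepts commands
until "$RPC" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done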
00:14:52.079 [2024-12-06 13:53:51.317114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:52.079 [2024-12-06 13:53:51.317234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:52.079 [2024-12-06 13:53:51.317465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:52.079 [2024-12-06 13:53:51.374210] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:52.079 13:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:52.079 13:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:14:52.079 13:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:52.079 13:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:52.079 13:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:52.337 13:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:52.337 13:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:52.596 [2024-12-06 13:53:51.803428] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:52.596 13:53:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:14:52.855 Malloc0 00:14:52.855 13:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:53.114 13:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:53.373 13:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:53.632 [2024-12-06 13:53:52.965745] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:53.632 13:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:14:53.891 [2024-12-06 13:53:53.249993] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:14:53.891 13:53:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:14:54.149 [2024-12-06 13:53:53.510246] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:14:54.149 13:53:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75213 00:14:54.149 13:53:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:14:54.149 13:53:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
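Everything the failover test needs on the target side is then assembled over RPC: the TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a single subsystem that exposes the bdev as a namespace, and three listeners on the same 10.0.0.3 address so individual paths can be torn down later. Stripped of the xtrace prefixes, the sequence traced above is:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

$rpc nvmf_create_transport -t tcp -o -u 8192               # transport options chosen by nvmftestinit
$rpc bdev_malloc_create 64 512 -b Malloc0                  # 64 MiB bdev, 512-byte blocks
$rpc nvmf_create_subsystem $nqn -a -s SPDK00000000000001   # allow any host, fixed serial
$rpc nvmf_subsystem_add_ns $nqn Malloc0
for port in 4420 4421 4422; do                             # three portals, one per failover path
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.3 -s "$port"
done

bdevperf is started out-of-band with its own RPC socket (-z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f), so that controllers can be attached and the 15-second verify run driven over /var/tmp/bdevperf.sock.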
00:14:54.149 13:53:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75213 /var/tmp/bdevperf.sock 00:14:54.149 13:53:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75213 ']' 00:14:54.149 13:53:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:54.149 13:53:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:54.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:54.149 13:53:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:54.149 13:53:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:54.149 13:53:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:54.717 13:53:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:54.717 13:53:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:14:54.717 13:53:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:14:54.976 NVMe0n1 00:14:54.976 13:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:14:55.250 00:14:55.250 13:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75228 00:14:55.250 13:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:55.250 13:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:14:56.628 13:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:56.628 13:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:14:59.919 13:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:14:59.919 00:14:59.919 13:53:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:00.178 13:53:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:15:03.534 13:54:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:03.534 [2024-12-06 13:54:02.785176] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:03.534 13:54:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:15:04.474 13:54:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:15:04.733 13:54:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75228 00:15:11.308 { 00:15:11.308 "results": [ 00:15:11.308 { 00:15:11.308 "job": "NVMe0n1", 00:15:11.308 "core_mask": "0x1", 00:15:11.308 "workload": "verify", 00:15:11.308 "status": "finished", 00:15:11.308 "verify_range": { 00:15:11.308 "start": 0, 00:15:11.308 "length": 16384 00:15:11.308 }, 00:15:11.308 "queue_depth": 128, 00:15:11.308 "io_size": 4096, 00:15:11.308 "runtime": 15.008675, 00:15:11.308 "iops": 9865.16131503947, 00:15:11.308 "mibps": 38.53578638687293, 00:15:11.308 "io_failed": 3461, 00:15:11.308 "io_timeout": 0, 00:15:11.308 "avg_latency_us": 12649.681724251303, 00:15:11.308 "min_latency_us": 618.1236363636364, 00:15:11.308 "max_latency_us": 17754.298181818183 00:15:11.308 } 00:15:11.308 ], 00:15:11.308 "core_count": 1 00:15:11.308 } 00:15:11.308 13:54:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75213 00:15:11.308 13:54:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75213 ']' 00:15:11.308 13:54:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75213 00:15:11.308 13:54:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:15:11.308 13:54:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:11.308 13:54:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75213 00:15:11.309 killing process with pid 75213 00:15:11.309 13:54:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:11.309 13:54:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:11.309 13:54:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75213' 00:15:11.309 13:54:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75213 00:15:11.309 13:54:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75213 00:15:11.309 13:54:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:11.309 [2024-12-06 13:53:53.580532] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:15:11.309 [2024-12-06 13:53:53.580635] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75213 ] 00:15:11.309 [2024-12-06 13:53:53.722119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.309 [2024-12-06 13:53:53.781717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.309 [2024-12-06 13:53:53.835575] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:11.309 Running I/O for 15 seconds... 
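On the initiator side, the bdevperf controller NVMe0 is attached to two of those portals with -x failover, the verify workload is started through bdevperf.py perform_tests, and listeners are then removed and re-added one at a time so that I/O has to keep migrating to a surviving path; the SQ-deletion aborts dumped from try.txt below are the expected fallout of each teardown. Condensed to its essentials, the path exercise traced above is:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf="$rpc -s /var/tmp/bdevperf.sock"
nqn=nqn.2016-06.io.spdk:cnode1

# one controller, two initial paths, failover multipath policy
$bperf bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n $nqn -x failover
$bperf bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n $nqn -x failover

# with perform_tests running, knock paths out and bring them back
$rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.3 -s 4420
sleep 3
$bperf bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n $nqn -x failover
$rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.3 -s 4421
sleep 3
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.3 -s 4420
sleep 1
$rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.3 -s 4422

The summary JSON above is self-consistent: 9865.16 IOPS at a 4096-byte io_size is 9865.16 * 4096 / 2^20 ≈ 38.54 MiB/s, matching the reported mibps, and the 3461 failed I/Os line up with the commands shown as aborted in the try.txt dump while each path was being torn down.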
00:15:11.309 10176.00 IOPS, 39.75 MiB/s [2024-12-06T13:54:10.713Z] [2024-12-06 13:53:55.867566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:92672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.309 [2024-12-06 13:53:55.867632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.309 [2024-12-06 13:53:55.867677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:92680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.309 [2024-12-06 13:53:55.867693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.309 [2024-12-06 13:53:55.867709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:92688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.309 [2024-12-06 13:53:55.867723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.309 [2024-12-06 13:53:55.867753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:92696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.309 [2024-12-06 13:53:55.867782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.309 [2024-12-06 13:53:55.867797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.309 [2024-12-06 13:53:55.867811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.309 [2024-12-06 13:53:55.867825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:92712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.309 [2024-12-06 13:53:55.867838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.309 [2024-12-06 13:53:55.867853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:92720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.309 [2024-12-06 13:53:55.867877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.309 [2024-12-06 13:53:55.867891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:92728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.309 [2024-12-06 13:53:55.867904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.309 [2024-12-06 13:53:55.867919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.309 [2024-12-06 13:53:55.867933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.309 [2024-12-06 13:53:55.867947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:92168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.309 [2024-12-06 13:53:55.867961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:15:11.309 [2024-12-06 13:53:55.867975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:92176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.309 [2024-12-06 13:53:55.868030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.309 [2024-12-06 13:53:55.868047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:92184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.309 [2024-12-06 13:53:55.868061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.309 [2024-12-06 13:53:55.868101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:92192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.309 [2024-12-06 13:53:55.868116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.309 [2024-12-06 13:53:55.868134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:92200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.309 [2024-12-06 13:53:55.868149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.309 [2024-12-06 13:53:55.868165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:92208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.309 [2024-12-06 13:53:55.868194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.309 [2024-12-06 13:53:55.868212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:92216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.309 [2024-12-06 13:53:55.868227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.309 [2024-12-06 13:53:55.868243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:92224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.309 [2024-12-06 13:53:55.868257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.309 [2024-12-06 13:53:55.868273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:92232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.309 [2024-12-06 13:53:55.868287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.309 [2024-12-06 13:53:55.868302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:92240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.309 [2024-12-06 13:53:55.868316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.309 [2024-12-06 13:53:55.868332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:92248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.309 [2024-12-06 13:53:55.868346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.309 [2024-12-06 
13:53:55.868361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:92256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.309 [2024-12-06 13:53:55.868375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.309 [2024-12-06 13:53:55.868391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:92264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.309 [2024-12-06 13:53:55.868404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.309 [2024-12-06 13:53:55.868420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:92272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.309 [2024-12-06 13:53:55.868434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.309 [2024-12-06 13:53:55.868485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:92280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.309 [2024-12-06 13:53:55.868515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.309 [2024-12-06 13:53:55.868529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:92736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.310 [2024-12-06 13:53:55.868543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.310 [2024-12-06 13:53:55.868557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.310 [2024-12-06 13:53:55.868571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.310 [2024-12-06 13:53:55.868585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:92752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.310 [2024-12-06 13:53:55.868599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.310 [2024-12-06 13:53:55.868613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:92760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.310 [2024-12-06 13:53:55.868626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.310 [2024-12-06 13:53:55.868646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.310 [2024-12-06 13:53:55.868660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.310 [2024-12-06 13:53:55.868674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:92776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.310 [2024-12-06 13:53:55.868688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.310 [2024-12-06 13:53:55.868703] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:92784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.310 [2024-12-06 13:53:55.868717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.310 [2024-12-06 13:53:55.868731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:92792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.310 [2024-12-06 13:53:55.868745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.310 [2024-12-06 13:53:55.868775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:92288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.310 [2024-12-06 13:53:55.868788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.310 [2024-12-06 13:53:55.868802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:92296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.310 [2024-12-06 13:53:55.868815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.310 [2024-12-06 13:53:55.868829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:92304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.310 [2024-12-06 13:53:55.868842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.310 [2024-12-06 13:53:55.868856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:92312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.310 [2024-12-06 13:53:55.868879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.310 [2024-12-06 13:53:55.868911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:92320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.310 [2024-12-06 13:53:55.868925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.310 [2024-12-06 13:53:55.868940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:92328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.310 [2024-12-06 13:53:55.868953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.310 [2024-12-06 13:53:55.868968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:92336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.310 [2024-12-06 13:53:55.868981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.310 [2024-12-06 13:53:55.868995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:92344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.310 [2024-12-06 13:53:55.869008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.310 [2024-12-06 13:53:55.869023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:104 nsid:1 lba:92352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.310 [2024-12-06 13:53:55.869036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.310 [2024-12-06 13:53:55.869050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:92360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.310 [2024-12-06 13:53:55.869063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.310 [2024-12-06 13:53:55.869078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:92368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.310 [2024-12-06 13:53:55.869091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.310 [2024-12-06 13:53:55.869106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:92376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.310 [2024-12-06 13:53:55.869119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.310 [2024-12-06 13:53:55.869138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:92384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.310 [2024-12-06 13:53:55.869154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.310 [2024-12-06 13:53:55.869180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:92392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.310 [2024-12-06 13:53:55.869194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.310 [2024-12-06 13:53:55.869209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:92400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.310 [2024-12-06 13:53:55.869222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.310 [2024-12-06 13:53:55.869237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:92408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.310 [2024-12-06 13:53:55.869251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.310 [2024-12-06 13:53:55.869266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.310 [2024-12-06 13:53:55.869286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.310 [2024-12-06 13:53:55.869301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:92424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.310 [2024-12-06 13:53:55.869315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.310 [2024-12-06 13:53:55.869330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:92432 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.310 [2024-12-06 13:53:55.869343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.310 [2024-12-06 13:53:55.869358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:92440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.310 [2024-12-06 13:53:55.869371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.310 [2024-12-06 13:53:55.869385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:92448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.310 [2024-12-06 13:53:55.869399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.310 [2024-12-06 13:53:55.869414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:92456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.310 [2024-12-06 13:53:55.869427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.311 [2024-12-06 13:53:55.869442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.311 [2024-12-06 13:53:55.869455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.311 [2024-12-06 13:53:55.869470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:92472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.311 [2024-12-06 13:53:55.869483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.311 [2024-12-06 13:53:55.869497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:92800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.311 [2024-12-06 13:53:55.869511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.311 [2024-12-06 13:53:55.869525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:92808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.311 [2024-12-06 13:53:55.869538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.311 [2024-12-06 13:53:55.869553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:92816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.311 [2024-12-06 13:53:55.869566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.311 [2024-12-06 13:53:55.869581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:92824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.311 [2024-12-06 13:53:55.869595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.311 [2024-12-06 13:53:55.869615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:92832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:15:11.311 [2024-12-06 13:53:55.869628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.311 [2024-12-06 13:53:55.869649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:92840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.311 [2024-12-06 13:53:55.869663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.311 [2024-12-06 13:53:55.869677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:92848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.311 [2024-12-06 13:53:55.869690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.311 [2024-12-06 13:53:55.869705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:92856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.311 [2024-12-06 13:53:55.869719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.311 [2024-12-06 13:53:55.869734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.311 [2024-12-06 13:53:55.869747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.311 [2024-12-06 13:53:55.869761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:92872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.311 [2024-12-06 13:53:55.869775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.311 [2024-12-06 13:53:55.869790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:92880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.311 [2024-12-06 13:53:55.869802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.311 [2024-12-06 13:53:55.869817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:92888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.311 [2024-12-06 13:53:55.869830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.311 [2024-12-06 13:53:55.869845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:92896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.311 [2024-12-06 13:53:55.869858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.311 [2024-12-06 13:53:55.869873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.311 [2024-12-06 13:53:55.869886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.311 [2024-12-06 13:53:55.869901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:92912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.311 [2024-12-06 13:53:55.869922] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.311 [2024-12-06 13:53:55.869937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:92920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.311 [2024-12-06 13:53:55.869951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.311 [2024-12-06 13:53:55.869965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:92480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.311 [2024-12-06 13:53:55.869978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.311 [2024-12-06 13:53:55.869993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:92488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.311 [2024-12-06 13:53:55.870012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.311 [2024-12-06 13:53:55.870028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:92496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.311 [2024-12-06 13:53:55.870041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.311 [2024-12-06 13:53:55.870055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:92504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.311 [2024-12-06 13:53:55.870069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.311 [2024-12-06 13:53:55.870088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:92512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.311 [2024-12-06 13:53:55.870124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.311 [2024-12-06 13:53:55.870140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:92520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.311 [2024-12-06 13:53:55.870153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.311 [2024-12-06 13:53:55.870167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:92528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.311 [2024-12-06 13:53:55.870181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.311 [2024-12-06 13:53:55.870196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:92536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.311 [2024-12-06 13:53:55.870209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.311 [2024-12-06 13:53:55.870224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:92544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.311 [2024-12-06 13:53:55.870237] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.311 [2024-12-06 13:53:55.870252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.311 [2024-12-06 13:53:55.870265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.311 [2024-12-06 13:53:55.870279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:92560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.311 [2024-12-06 13:53:55.870292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.311 [2024-12-06 13:53:55.870307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:92568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.311 [2024-12-06 13:53:55.870319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.312 [2024-12-06 13:53:55.870334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:92576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.312 [2024-12-06 13:53:55.870348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.312 [2024-12-06 13:53:55.870362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:92584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.312 [2024-12-06 13:53:55.870375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.312 [2024-12-06 13:53:55.870396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:92592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.312 [2024-12-06 13:53:55.870416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.312 [2024-12-06 13:53:55.870431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:92600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.312 [2024-12-06 13:53:55.870444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.312 [2024-12-06 13:53:55.870459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:92928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.312 [2024-12-06 13:53:55.870472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.312 [2024-12-06 13:53:55.870487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:92936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.312 [2024-12-06 13:53:55.870511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.312 [2024-12-06 13:53:55.870526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:92944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.312 [2024-12-06 13:53:55.870539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.312 [2024-12-06 13:53:55.870553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:92952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.312 [2024-12-06 13:53:55.870566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.312 [2024-12-06 13:53:55.870586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:92960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.312 [2024-12-06 13:53:55.870600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.312 [2024-12-06 13:53:55.870614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:92968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.312 [2024-12-06 13:53:55.870627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.312 [2024-12-06 13:53:55.870641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:92976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.312 [2024-12-06 13:53:55.870655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.312 [2024-12-06 13:53:55.870669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:92984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.312 [2024-12-06 13:53:55.870682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.312 [2024-12-06 13:53:55.870697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:92608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.312 [2024-12-06 13:53:55.870710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.312 [2024-12-06 13:53:55.870725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:92616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.312 [2024-12-06 13:53:55.870737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.312 [2024-12-06 13:53:55.870752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:92624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.312 [2024-12-06 13:53:55.870766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.312 [2024-12-06 13:53:55.870787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:92632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.312 [2024-12-06 13:53:55.870801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.312 [2024-12-06 13:53:55.870815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:92640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.312 [2024-12-06 13:53:55.870828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:11.312 [2024-12-06 13:53:55.870843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:92648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.312 [2024-12-06 13:53:55.870856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.312 [2024-12-06 13:53:55.870871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:92656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.312 [2024-12-06 13:53:55.870889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.312 [2024-12-06 13:53:55.870904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd55ac0 is same with the state(6) to be set 00:15:11.312 [2024-12-06 13:53:55.870920] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:11.312 [2024-12-06 13:53:55.870930] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:11.312 [2024-12-06 13:53:55.870940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92664 len:8 PRP1 0x0 PRP2 0x0 00:15:11.312 [2024-12-06 13:53:55.870953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.312 [2024-12-06 13:53:55.870968] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:11.312 [2024-12-06 13:53:55.870977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:11.312 [2024-12-06 13:53:55.870987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92992 len:8 PRP1 0x0 PRP2 0x0 00:15:11.312 [2024-12-06 13:53:55.871000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.312 [2024-12-06 13:53:55.871012] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:11.312 [2024-12-06 13:53:55.871028] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:11.312 [2024-12-06 13:53:55.871038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93000 len:8 PRP1 0x0 PRP2 0x0 00:15:11.312 [2024-12-06 13:53:55.871051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.312 [2024-12-06 13:53:55.871064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:11.312 [2024-12-06 13:53:55.871074] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:11.312 [2024-12-06 13:53:55.871083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93008 len:8 PRP1 0x0 PRP2 0x0 00:15:11.312 [2024-12-06 13:53:55.871096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.312 [2024-12-06 13:53:55.871123] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:11.312 [2024-12-06 13:53:55.871133] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:11.312 [2024-12-06 13:53:55.871143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:93016 len:8 PRP1 0x0 PRP2 0x0 00:15:11.312 [2024-12-06 13:53:55.871165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.312 [2024-12-06 13:53:55.871180] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:11.312 [2024-12-06 13:53:55.871189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:11.312 [2024-12-06 13:53:55.871199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93024 len:8 PRP1 0x0 PRP2 0x0 00:15:11.312 [2024-12-06 13:53:55.871212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.312 [2024-12-06 13:53:55.871224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:11.312 [2024-12-06 13:53:55.871234] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:11.312 [2024-12-06 13:53:55.871244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93032 len:8 PRP1 0x0 PRP2 0x0 00:15:11.313 [2024-12-06 13:53:55.871256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.313 [2024-12-06 13:53:55.871269] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:11.313 [2024-12-06 13:53:55.871278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:11.313 [2024-12-06 13:53:55.871293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93040 len:8 PRP1 0x0 PRP2 0x0 00:15:11.313 [2024-12-06 13:53:55.871306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.313 [2024-12-06 13:53:55.871319] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:11.313 [2024-12-06 13:53:55.871329] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:11.313 [2024-12-06 13:53:55.871339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93048 len:8 PRP1 0x0 PRP2 0x0 00:15:11.313 [2024-12-06 13:53:55.871351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.313 [2024-12-06 13:53:55.871364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:11.313 [2024-12-06 13:53:55.871374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:11.313 [2024-12-06 13:53:55.871384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93056 len:8 PRP1 0x0 PRP2 0x0 00:15:11.313 [2024-12-06 13:53:55.871396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.313 [2024-12-06 13:53:55.871409] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:11.313 [2024-12-06 13:53:55.871449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:11.313 [2024-12-06 13:53:55.871461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93064 len:8 PRP1 0x0 PRP2 
0x0 00:15:11.313 [2024-12-06 13:53:55.871474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.313 [2024-12-06 13:53:55.871487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:11.313 [2024-12-06 13:53:55.871497] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:11.313 [2024-12-06 13:53:55.871506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93072 len:8 PRP1 0x0 PRP2 0x0 00:15:11.313 [2024-12-06 13:53:55.871519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.313 [2024-12-06 13:53:55.871532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:11.313 [2024-12-06 13:53:55.871541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:11.313 [2024-12-06 13:53:55.871558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93080 len:8 PRP1 0x0 PRP2 0x0 00:15:11.313 [2024-12-06 13:53:55.871571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.313 [2024-12-06 13:53:55.871584] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:11.313 [2024-12-06 13:53:55.871593] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:11.313 [2024-12-06 13:53:55.871603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93088 len:8 PRP1 0x0 PRP2 0x0 00:15:11.313 [2024-12-06 13:53:55.871615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.313 [2024-12-06 13:53:55.871628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:11.313 [2024-12-06 13:53:55.871637] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:11.313 [2024-12-06 13:53:55.871646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93096 len:8 PRP1 0x0 PRP2 0x0 00:15:11.313 [2024-12-06 13:53:55.871659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.313 [2024-12-06 13:53:55.871672] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:11.313 [2024-12-06 13:53:55.871681] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:11.313 [2024-12-06 13:53:55.871696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93104 len:8 PRP1 0x0 PRP2 0x0 00:15:11.313 [2024-12-06 13:53:55.871709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.313 [2024-12-06 13:53:55.871722] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:11.313 [2024-12-06 13:53:55.871732] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:11.313 [2024-12-06 13:53:55.871741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93112 len:8 PRP1 0x0 PRP2 0x0 00:15:11.313 [2024-12-06 13:53:55.871765] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.313 [2024-12-06 13:53:55.871789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:11.313 [2024-12-06 13:53:55.871809] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:11.313 [2024-12-06 13:53:55.871819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93120 len:8 PRP1 0x0 PRP2 0x0 00:15:11.313 [2024-12-06 13:53:55.871832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.313 [2024-12-06 13:53:55.871845] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:11.313 [2024-12-06 13:53:55.871858] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:11.313 [2024-12-06 13:53:55.871878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93128 len:8 PRP1 0x0 PRP2 0x0 00:15:11.313 [2024-12-06 13:53:55.871890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.313 [2024-12-06 13:53:55.871903] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:11.313 [2024-12-06 13:53:55.871913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:11.313 [2024-12-06 13:53:55.871922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93136 len:8 PRP1 0x0 PRP2 0x0 00:15:11.313 [2024-12-06 13:53:55.871935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.313 [2024-12-06 13:53:55.871948] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:11.313 [2024-12-06 13:53:55.871963] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:11.313 [2024-12-06 13:53:55.871974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93144 len:8 PRP1 0x0 PRP2 0x0 00:15:11.313 [2024-12-06 13:53:55.871987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.313 [2024-12-06 13:53:55.871999] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:11.313 [2024-12-06 13:53:55.872008] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:11.313 [2024-12-06 13:53:55.872018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93152 len:8 PRP1 0x0 PRP2 0x0 00:15:11.313 [2024-12-06 13:53:55.872031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.313 [2024-12-06 13:53:55.872043] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:11.313 [2024-12-06 13:53:55.872053] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:11.313 [2024-12-06 13:53:55.872062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93160 len:8 PRP1 0x0 PRP2 0x0 00:15:11.313 [2024-12-06 13:53:55.872074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.313 [2024-12-06 13:53:55.872087] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:11.313 [2024-12-06 13:53:55.872096] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:11.313 [2024-12-06 13:53:55.872122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93168 len:8 PRP1 0x0 PRP2 0x0 00:15:11.313 [2024-12-06 13:53:55.872136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.313 [2024-12-06 13:53:55.872149] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:11.313 [2024-12-06 13:53:55.872158] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:11.313 [2024-12-06 13:53:55.872168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93176 len:8 PRP1 0x0 PRP2 0x0 00:15:11.313 [2024-12-06 13:53:55.872181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.314 [2024-12-06 13:53:55.872243] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:15:11.314 [2024-12-06 13:53:55.872300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.314 [2024-12-06 13:53:55.872321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.314 [2024-12-06 13:53:55.872336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.314 [2024-12-06 13:53:55.872349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.314 [2024-12-06 13:53:55.872368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.314 [2024-12-06 13:53:55.872381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.314 [2024-12-06 13:53:55.872395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.314 [2024-12-06 13:53:55.872408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.314 [2024-12-06 13:53:55.872421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:15:11.314 [2024-12-06 13:53:55.872472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce6c60 (9): Bad file descriptor 00:15:11.314 [2024-12-06 13:53:55.876261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:15:11.314 [2024-12-06 13:53:55.905415] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:15:11.314 9643.50 IOPS, 37.67 MiB/s [2024-12-06T13:54:10.718Z] 9573.00 IOPS, 37.39 MiB/s [2024-12-06T13:54:10.718Z] 9547.75 IOPS, 37.30 MiB/s [2024-12-06T13:54:10.718Z] [2024-12-06 13:53:59.488843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:110968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.314 [2024-12-06 13:53:59.488909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.314 [2024-12-06 13:53:59.488951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.314 [2024-12-06 13:53:59.488965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.314 [2024-12-06 13:53:59.488979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:110984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.314 [2024-12-06 13:53:59.488993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.314 [2024-12-06 13:53:59.489007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:110992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.314 [2024-12-06 13:53:59.489019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.314 [2024-12-06 13:53:59.489032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:111000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.314 [2024-12-06 13:53:59.489044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.314 [2024-12-06 13:53:59.489058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:111008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.314 [2024-12-06 13:53:59.489070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.314 [2024-12-06 13:53:59.489084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:111016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.314 [2024-12-06 13:53:59.489095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.314 [2024-12-06 13:53:59.489131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:111024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.314 [2024-12-06 13:53:59.489146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.314 [2024-12-06 13:53:59.489160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:111032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.314 [2024-12-06 13:53:59.489172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.314 [2024-12-06 13:53:59.489185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:111040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.314 [2024-12-06 13:53:59.489197] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.314 [2024-12-06 13:53:59.489214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:111048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.314 [2024-12-06 13:53:59.489226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.314 [2024-12-06 13:53:59.489261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:111056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.314 [2024-12-06 13:53:59.489274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.314 [2024-12-06 13:53:59.489288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:110392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.314 [2024-12-06 13:53:59.489300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.314 [2024-12-06 13:53:59.489313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:110400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.314 [2024-12-06 13:53:59.489325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.314 [2024-12-06 13:53:59.489339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:110408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.314 [2024-12-06 13:53:59.489351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.314 [2024-12-06 13:53:59.489364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:110416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.314 [2024-12-06 13:53:59.489376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.314 [2024-12-06 13:53:59.489390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:110424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.314 [2024-12-06 13:53:59.489402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.315 [2024-12-06 13:53:59.489418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:110432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.315 [2024-12-06 13:53:59.489430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.315 [2024-12-06 13:53:59.489443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:110440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.315 [2024-12-06 13:53:59.489455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.315 [2024-12-06 13:53:59.489469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:110448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.315 [2024-12-06 13:53:59.489481] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.315 [2024-12-06 13:53:59.489494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:110456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.315 [2024-12-06 13:53:59.489516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.315 [2024-12-06 13:53:59.489529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:110464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.315 [2024-12-06 13:53:59.489541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.315 [2024-12-06 13:53:59.489555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:110472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.315 [2024-12-06 13:53:59.489567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.315 [2024-12-06 13:53:59.489580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:110480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.315 [2024-12-06 13:53:59.489600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.315 [2024-12-06 13:53:59.489614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:110488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.315 [2024-12-06 13:53:59.489641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.315 [2024-12-06 13:53:59.489655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:110496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.315 [2024-12-06 13:53:59.489667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.315 [2024-12-06 13:53:59.489681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:110504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.315 [2024-12-06 13:53:59.489693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.315 [2024-12-06 13:53:59.489707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:110512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.315 [2024-12-06 13:53:59.489719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.315 [2024-12-06 13:53:59.489733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:111064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.315 [2024-12-06 13:53:59.489745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.315 [2024-12-06 13:53:59.489759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:111072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.315 [2024-12-06 13:53:59.489771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.315 [2024-12-06 13:53:59.489784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:111080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.315 [2024-12-06 13:53:59.489797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.315 [2024-12-06 13:53:59.489811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:111088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.315 [2024-12-06 13:53:59.489824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.315 [2024-12-06 13:53:59.489838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:111096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.315 [2024-12-06 13:53:59.489850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.315 [2024-12-06 13:53:59.489864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:111104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.315 [2024-12-06 13:53:59.489877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.315 [2024-12-06 13:53:59.489891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:111112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.315 [2024-12-06 13:53:59.489903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.315 [2024-12-06 13:53:59.489916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:111120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.315 [2024-12-06 13:53:59.489929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.315 [2024-12-06 13:53:59.489949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:111128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.315 [2024-12-06 13:53:59.489962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.315 [2024-12-06 13:53:59.489976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:111136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.315 [2024-12-06 13:53:59.489988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.315 [2024-12-06 13:53:59.490001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:111144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.315 [2024-12-06 13:53:59.490014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.315 [2024-12-06 13:53:59.490043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:111152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.315 [2024-12-06 13:53:59.490054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:15:11.315 [2024-12-06 13:53:59.490068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:111160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.315 [2024-12-06 13:53:59.490080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.315 [2024-12-06 13:53:59.490094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:111168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.315 [2024-12-06 13:53:59.490106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.315 [2024-12-06 13:53:59.490120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:110520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.315 [2024-12-06 13:53:59.490142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.315 [2024-12-06 13:53:59.490168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:110528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.315 [2024-12-06 13:53:59.490180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.315 [2024-12-06 13:53:59.490194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:110536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.315 [2024-12-06 13:53:59.490206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.315 [2024-12-06 13:53:59.490220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:110544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.315 [2024-12-06 13:53:59.490231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.315 [2024-12-06 13:53:59.490244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:110552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.315 [2024-12-06 13:53:59.490257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.315 [2024-12-06 13:53:59.490270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:110560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.315 [2024-12-06 13:53:59.490282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.316 [2024-12-06 13:53:59.490314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:110568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.316 [2024-12-06 13:53:59.490333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.316 [2024-12-06 13:53:59.490348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.316 [2024-12-06 13:53:59.490361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.316 [2024-12-06 
13:53:59.490375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:111176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.316 [2024-12-06 13:53:59.490387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.316 [2024-12-06 13:53:59.490401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:111184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.316 [2024-12-06 13:53:59.490413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.316 [2024-12-06 13:53:59.490426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:111192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.316 [2024-12-06 13:53:59.490439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.316 [2024-12-06 13:53:59.490453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:111200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.316 [2024-12-06 13:53:59.490465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.316 [2024-12-06 13:53:59.490478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:111208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.316 [2024-12-06 13:53:59.490491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.316 [2024-12-06 13:53:59.490504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:111216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.316 [2024-12-06 13:53:59.490517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.316 [2024-12-06 13:53:59.490542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:111224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.316 [2024-12-06 13:53:59.490554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.316 [2024-12-06 13:53:59.490568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:111232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.316 [2024-12-06 13:53:59.490580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.316 [2024-12-06 13:53:59.490594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:111240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.316 [2024-12-06 13:53:59.490606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.316 [2024-12-06 13:53:59.490620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:111248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.316 [2024-12-06 13:53:59.490632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.316 [2024-12-06 13:53:59.490646] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:111256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.316 [2024-12-06 13:53:59.490659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.316 [2024-12-06 13:53:59.490688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:111264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.316 [2024-12-06 13:53:59.490707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.316 [2024-12-06 13:53:59.490722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:111272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.316 [2024-12-06 13:53:59.490735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.316 [2024-12-06 13:53:59.490749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:111280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.316 [2024-12-06 13:53:59.490761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.316 [2024-12-06 13:53:59.490776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:110584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.316 [2024-12-06 13:53:59.490789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.316 [2024-12-06 13:53:59.490803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:110592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.316 [2024-12-06 13:53:59.490815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.316 [2024-12-06 13:53:59.490829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:110600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.316 [2024-12-06 13:53:59.490842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.316 [2024-12-06 13:53:59.490856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:110608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.316 [2024-12-06 13:53:59.490869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.316 [2024-12-06 13:53:59.490883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:110616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.316 [2024-12-06 13:53:59.490895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.316 [2024-12-06 13:53:59.490909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.316 [2024-12-06 13:53:59.490922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.316 [2024-12-06 13:53:59.490936] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:67 nsid:1 lba:110632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.316 [2024-12-06 13:53:59.490948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.316 [2024-12-06 13:53:59.490962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.316 [2024-12-06 13:53:59.490975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.316 [2024-12-06 13:53:59.490989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:110648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.316 [2024-12-06 13:53:59.491001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.316 [2024-12-06 13:53:59.491015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:110656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.316 [2024-12-06 13:53:59.491028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.316 [2024-12-06 13:53:59.491063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:110664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.316 [2024-12-06 13:53:59.491076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.316 [2024-12-06 13:53:59.491090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:110672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.316 [2024-12-06 13:53:59.491102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.316 [2024-12-06 13:53:59.491117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:110680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.316 [2024-12-06 13:53:59.491129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.316 [2024-12-06 13:53:59.491153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:110688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.316 [2024-12-06 13:53:59.491169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.317 [2024-12-06 13:53:59.491183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:110696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.317 [2024-12-06 13:53:59.491197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.317 [2024-12-06 13:53:59.491211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:110704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.317 [2024-12-06 13:53:59.491223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.317 [2024-12-06 13:53:59.491237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 
nsid:1 lba:110712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.317 [2024-12-06 13:53:59.491249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.317 [2024-12-06 13:53:59.491263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:110720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.317 [2024-12-06 13:53:59.491275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.317 [2024-12-06 13:53:59.491289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:110728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.317 [2024-12-06 13:53:59.491302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.317 [2024-12-06 13:53:59.491316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:110736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.317 [2024-12-06 13:53:59.491328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.317 [2024-12-06 13:53:59.491342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:110744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.317 [2024-12-06 13:53:59.491354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.317 [2024-12-06 13:53:59.491368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:110752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.317 [2024-12-06 13:53:59.491380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.317 [2024-12-06 13:53:59.491394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:110760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.317 [2024-12-06 13:53:59.491423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.317 [2024-12-06 13:53:59.491469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:110768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.317 [2024-12-06 13:53:59.491482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.317 [2024-12-06 13:53:59.491496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:111288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.317 [2024-12-06 13:53:59.491509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.317 [2024-12-06 13:53:59.491523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:111296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.317 [2024-12-06 13:53:59.491535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.317 [2024-12-06 13:53:59.491549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:111304 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:15:11.317 [2024-12-06 13:53:59.491562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.317 [2024-12-06 13:53:59.491576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:111312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.317 [2024-12-06 13:53:59.491589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.317 [2024-12-06 13:53:59.491612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:111320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.317 [2024-12-06 13:53:59.491625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.317 [2024-12-06 13:53:59.491639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.317 [2024-12-06 13:53:59.491652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.317 [2024-12-06 13:53:59.491666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:111336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.317 [2024-12-06 13:53:59.491678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.317 [2024-12-06 13:53:59.491692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:111344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.317 [2024-12-06 13:53:59.491705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.317 [2024-12-06 13:53:59.491719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:110776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.317 [2024-12-06 13:53:59.491732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.317 [2024-12-06 13:53:59.491747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:110784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.317 [2024-12-06 13:53:59.491774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.317 [2024-12-06 13:53:59.491788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:110792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.317 [2024-12-06 13:53:59.491800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.317 [2024-12-06 13:53:59.491821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:110800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.317 [2024-12-06 13:53:59.491845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.317 [2024-12-06 13:53:59.491859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:110808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:11.317 [2024-12-06 13:53:59.491871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.317 [2024-12-06 13:53:59.491885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:110816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.317 [2024-12-06 13:53:59.491898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.317 [2024-12-06 13:53:59.491912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:110824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.317 [2024-12-06 13:53:59.491924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.317 [2024-12-06 13:53:59.491938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:110832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.317 [2024-12-06 13:53:59.491950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.317 [2024-12-06 13:53:59.491964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:110840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.317 [2024-12-06 13:53:59.491976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.317 [2024-12-06 13:53:59.491990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:110848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.317 [2024-12-06 13:53:59.492002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.317 [2024-12-06 13:53:59.492016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:110856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.317 [2024-12-06 13:53:59.492028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.317 [2024-12-06 13:53:59.492042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:110864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.317 [2024-12-06 13:53:59.492054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.317 [2024-12-06 13:53:59.492073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:110872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.317 [2024-12-06 13:53:59.492085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.318 [2024-12-06 13:53:59.492099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:110880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.318 [2024-12-06 13:53:59.492111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.318 [2024-12-06 13:53:59.492125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:110888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.318 [2024-12-06 
13:53:59.492138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.318 [2024-12-06 13:53:59.492176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:110896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.318 [2024-12-06 13:53:59.492196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.318 [2024-12-06 13:53:59.492211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:111352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.318 [2024-12-06 13:53:59.492224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.318 [2024-12-06 13:53:59.492238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:111360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.318 [2024-12-06 13:53:59.492250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.318 [2024-12-06 13:53:59.492264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:111368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.318 [2024-12-06 13:53:59.492276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.318 [2024-12-06 13:53:59.492290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:111376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.318 [2024-12-06 13:53:59.492302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.318 [2024-12-06 13:53:59.492316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:111384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.318 [2024-12-06 13:53:59.492328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.318 [2024-12-06 13:53:59.492342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:111392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.318 [2024-12-06 13:53:59.492354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.318 [2024-12-06 13:53:59.492368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:111400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.318 [2024-12-06 13:53:59.492389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.318 [2024-12-06 13:53:59.492403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:111408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.318 [2024-12-06 13:53:59.492415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.318 [2024-12-06 13:53:59.492429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:110904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.318 [2024-12-06 13:53:59.492441] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.318 [2024-12-06 13:53:59.492454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:110912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.318 [2024-12-06 13:53:59.492467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.318 [2024-12-06 13:53:59.492481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:110920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.318 [2024-12-06 13:53:59.492493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.318 [2024-12-06 13:53:59.492507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:110928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.318 [2024-12-06 13:53:59.492519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.318 [2024-12-06 13:53:59.492557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:110936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.318 [2024-12-06 13:53:59.492571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.318 [2024-12-06 13:53:59.492584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:110944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.318 [2024-12-06 13:53:59.492597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.318 [2024-12-06 13:53:59.492610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:110952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.318 [2024-12-06 13:53:59.492623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.318 [2024-12-06 13:53:59.492662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:11.318 [2024-12-06 13:53:59.492676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:11.318 [2024-12-06 13:53:59.492686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110960 len:8 PRP1 0x0 PRP2 0x0 00:15:11.318 [2024-12-06 13:53:59.492698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.318 [2024-12-06 13:53:59.492757] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:15:11.318 [2024-12-06 13:53:59.492812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.318 [2024-12-06 13:53:59.492832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.318 [2024-12-06 13:53:59.492846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.318 [2024-12-06 13:53:59.492859] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.318 [2024-12-06 13:53:59.492871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.318 [2024-12-06 13:53:59.492883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.318 [2024-12-06 13:53:59.492897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.318 [2024-12-06 13:53:59.492909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.318 [2024-12-06 13:53:59.492921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:15:11.318 [2024-12-06 13:53:59.492954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce6c60 (9): Bad file descriptor 00:15:11.318 [2024-12-06 13:53:59.496640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:15:11.318 [2024-12-06 13:53:59.518703] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:15:11.318 9470.60 IOPS, 36.99 MiB/s [2024-12-06T13:54:10.722Z] 9479.83 IOPS, 37.03 MiB/s [2024-12-06T13:54:10.722Z] 9491.57 IOPS, 37.08 MiB/s [2024-12-06T13:54:10.722Z] 9632.12 IOPS, 37.63 MiB/s [2024-12-06T13:54:10.722Z] 9725.44 IOPS, 37.99 MiB/s [2024-12-06T13:54:10.722Z] [2024-12-06 13:54:04.061703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:89968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.318 [2024-12-06 13:54:04.061764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.318 [2024-12-06 13:54:04.061790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:89976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.318 [2024-12-06 13:54:04.061833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.318 [2024-12-06 13:54:04.061850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:89984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.318 [2024-12-06 13:54:04.061862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.318 [2024-12-06 13:54:04.061876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:89992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.318 [2024-12-06 13:54:04.061889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.319 [2024-12-06 13:54:04.061902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:90000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.319 [2024-12-06 13:54:04.061915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.319 [2024-12-06 13:54:04.061929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:44 nsid:1 lba:90008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.319 [2024-12-06 13:54:04.061941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.319 [2024-12-06 13:54:04.061954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:90016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.319 [2024-12-06 13:54:04.061967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.319 [2024-12-06 13:54:04.061980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:90024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.319 [2024-12-06 13:54:04.061992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.319 [2024-12-06 13:54:04.062006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:90032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.319 [2024-12-06 13:54:04.062018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.319 [2024-12-06 13:54:04.062032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:90456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.319 [2024-12-06 13:54:04.062044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.319 [2024-12-06 13:54:04.062058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:90464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.319 [2024-12-06 13:54:04.062070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.319 [2024-12-06 13:54:04.062083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:90472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.319 [2024-12-06 13:54:04.062125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.319 [2024-12-06 13:54:04.062142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:90480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.319 [2024-12-06 13:54:04.062155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.319 [2024-12-06 13:54:04.062169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:90488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.319 [2024-12-06 13:54:04.062182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.319 [2024-12-06 13:54:04.062204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:90496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.319 [2024-12-06 13:54:04.062218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.319 [2024-12-06 13:54:04.062232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:90504 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:15:11.319 [2024-12-06 13:54:04.062245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.319 [2024-12-06 13:54:04.062259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:90512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.319 [2024-12-06 13:54:04.062272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.319 [2024-12-06 13:54:04.062289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:90520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.319 [2024-12-06 13:54:04.062303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.319 [2024-12-06 13:54:04.062317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:90528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.319 [2024-12-06 13:54:04.062346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.319 [2024-12-06 13:54:04.062363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:90536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.319 [2024-12-06 13:54:04.062376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.319 [2024-12-06 13:54:04.062391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:90544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.319 [2024-12-06 13:54:04.062404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.319 [2024-12-06 13:54:04.062418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:90552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.319 [2024-12-06 13:54:04.062447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.319 [2024-12-06 13:54:04.062462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:90560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.319 [2024-12-06 13:54:04.062475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.319 [2024-12-06 13:54:04.062490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:90568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.319 [2024-12-06 13:54:04.062503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.319 [2024-12-06 13:54:04.062519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:90576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.319 [2024-12-06 13:54:04.062533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.319 [2024-12-06 13:54:04.062547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:90584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.319 [2024-12-06 
13:54:04.062561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.319 [2024-12-06 13:54:04.062575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:90592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.319 [2024-12-06 13:54:04.062595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.319 [2024-12-06 13:54:04.062611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:90040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.319 [2024-12-06 13:54:04.062625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.319 [2024-12-06 13:54:04.062639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:90048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.319 [2024-12-06 13:54:04.062653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.319 [2024-12-06 13:54:04.062668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:90056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.319 [2024-12-06 13:54:04.062682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.319 [2024-12-06 13:54:04.062696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:90064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.319 [2024-12-06 13:54:04.062725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.319 [2024-12-06 13:54:04.062740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:90072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.319 [2024-12-06 13:54:04.062753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.319 [2024-12-06 13:54:04.062785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:90080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.319 [2024-12-06 13:54:04.062799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.319 [2024-12-06 13:54:04.062814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:90088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.319 [2024-12-06 13:54:04.062828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.320 [2024-12-06 13:54:04.062857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:90096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.320 [2024-12-06 13:54:04.062871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.320 [2024-12-06 13:54:04.062885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:90104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.320 [2024-12-06 13:54:04.062898] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.320 [2024-12-06 13:54:04.062913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:90112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.320 [2024-12-06 13:54:04.062926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.320 [2024-12-06 13:54:04.062940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:90120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.320 [2024-12-06 13:54:04.062953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.320 [2024-12-06 13:54:04.062968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:90128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.320 [2024-12-06 13:54:04.062981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.320 [2024-12-06 13:54:04.062995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:90136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.320 [2024-12-06 13:54:04.063014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.320 [2024-12-06 13:54:04.063045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:90144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.320 [2024-12-06 13:54:04.063058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.320 [2024-12-06 13:54:04.063072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:90152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.320 [2024-12-06 13:54:04.063084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.320 [2024-12-06 13:54:04.063099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:90160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.320 [2024-12-06 13:54:04.063111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.320 [2024-12-06 13:54:04.063125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:90600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.320 [2024-12-06 13:54:04.063139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.320 [2024-12-06 13:54:04.063153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:90608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.320 [2024-12-06 13:54:04.063166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.320 [2024-12-06 13:54:04.063180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:90616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.320 [2024-12-06 13:54:04.063203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.320 [2024-12-06 13:54:04.063218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:90624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.320 [2024-12-06 13:54:04.063231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.320 [2024-12-06 13:54:04.063245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:90632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.320 [2024-12-06 13:54:04.063258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.320 [2024-12-06 13:54:04.063272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:90640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.320 [2024-12-06 13:54:04.063285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.320 [2024-12-06 13:54:04.063300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:90648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.320 [2024-12-06 13:54:04.063312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.320 [2024-12-06 13:54:04.063327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:90656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.320 [2024-12-06 13:54:04.063339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.320 [2024-12-06 13:54:04.063354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:90664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.320 [2024-12-06 13:54:04.063366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.320 [2024-12-06 13:54:04.063387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:90672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.320 [2024-12-06 13:54:04.063400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.320 [2024-12-06 13:54:04.063426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:90168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.320 [2024-12-06 13:54:04.063441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.320 [2024-12-06 13:54:04.063455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:90176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.320 [2024-12-06 13:54:04.063468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.320 [2024-12-06 13:54:04.063482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:90184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.320 [2024-12-06 13:54:04.063495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:11.320 [2024-12-06 13:54:04.063509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:90192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.320 [2024-12-06 13:54:04.063521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.320 [2024-12-06 13:54:04.063535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:90200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.320 [2024-12-06 13:54:04.063548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.320 [2024-12-06 13:54:04.063562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:90208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.320 [2024-12-06 13:54:04.063575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.320 [2024-12-06 13:54:04.063589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.320 [2024-12-06 13:54:04.063601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.321 [2024-12-06 13:54:04.063615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:90224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.321 [2024-12-06 13:54:04.063628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.321 [2024-12-06 13:54:04.063642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:90680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.321 [2024-12-06 13:54:04.063655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.321 [2024-12-06 13:54:04.063669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:90688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.321 [2024-12-06 13:54:04.063681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.321 [2024-12-06 13:54:04.063695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:90696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.321 [2024-12-06 13:54:04.063708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.321 [2024-12-06 13:54:04.063723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:90704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.321 [2024-12-06 13:54:04.063742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.321 [2024-12-06 13:54:04.063758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.321 [2024-12-06 13:54:04.063771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.321 
[2024-12-06 13:54:04.063785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:90720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.321 [2024-12-06 13:54:04.063798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.321 [2024-12-06 13:54:04.063812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:90728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.321 [2024-12-06 13:54:04.063824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.321 [2024-12-06 13:54:04.063838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:90736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.321 [2024-12-06 13:54:04.063851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.321 [2024-12-06 13:54:04.063865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:90744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.321 [2024-12-06 13:54:04.063878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.321 [2024-12-06 13:54:04.063892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:90752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.321 [2024-12-06 13:54:04.063904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.321 [2024-12-06 13:54:04.063918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:90760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.321 [2024-12-06 13:54:04.063931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.321 [2024-12-06 13:54:04.063945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:90768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.321 [2024-12-06 13:54:04.063957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.321 [2024-12-06 13:54:04.063971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:90776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.321 [2024-12-06 13:54:04.063984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.321 [2024-12-06 13:54:04.063998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:90784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.321 [2024-12-06 13:54:04.064011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.321 [2024-12-06 13:54:04.064025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:90792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.321 [2024-12-06 13:54:04.064038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.321 [2024-12-06 13:54:04.064052] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:90800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.321 [2024-12-06 13:54:04.064064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.321 [2024-12-06 13:54:04.064085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:90808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.321 [2024-12-06 13:54:04.064107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.321 [2024-12-06 13:54:04.064124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:90816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.321 [2024-12-06 13:54:04.064137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.321 [2024-12-06 13:54:04.064151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:90824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.321 [2024-12-06 13:54:04.064164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.321 [2024-12-06 13:54:04.064179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:90832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.321 [2024-12-06 13:54:04.064192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.321 [2024-12-06 13:54:04.064207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:90840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.321 [2024-12-06 13:54:04.064220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.321 [2024-12-06 13:54:04.064234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:90848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.321 [2024-12-06 13:54:04.064247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.321 [2024-12-06 13:54:04.064261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:90856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.321 [2024-12-06 13:54:04.064273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.321 [2024-12-06 13:54:04.064288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:90232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.321 [2024-12-06 13:54:04.064301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.321 [2024-12-06 13:54:04.064315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:90240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.321 [2024-12-06 13:54:04.064327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.321 [2024-12-06 13:54:04.064342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:48 nsid:1 lba:90248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.321 [2024-12-06 13:54:04.064354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.321 [2024-12-06 13:54:04.064368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:90256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.321 [2024-12-06 13:54:04.064380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.321 [2024-12-06 13:54:04.064394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:90264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.321 [2024-12-06 13:54:04.064407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.321 [2024-12-06 13:54:04.064421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:90272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.321 [2024-12-06 13:54:04.064434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.321 [2024-12-06 13:54:04.064455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:90280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.322 [2024-12-06 13:54:04.064468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.322 [2024-12-06 13:54:04.064482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:90288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.322 [2024-12-06 13:54:04.064495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.322 [2024-12-06 13:54:04.064509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:90864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.322 [2024-12-06 13:54:04.064522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.322 [2024-12-06 13:54:04.064536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:90872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.322 [2024-12-06 13:54:04.064549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.322 [2024-12-06 13:54:04.064563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:90880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.322 [2024-12-06 13:54:04.064576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.322 [2024-12-06 13:54:04.064590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:90888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.322 [2024-12-06 13:54:04.064603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.322 [2024-12-06 13:54:04.064616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:90896 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:15:11.322 [2024-12-06 13:54:04.064629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.322 [2024-12-06 13:54:04.064644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:90904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.322 [2024-12-06 13:54:04.064657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.322 [2024-12-06 13:54:04.064671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:90912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.322 [2024-12-06 13:54:04.064684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.322 [2024-12-06 13:54:04.064698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:90920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.322 [2024-12-06 13:54:04.064711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.322 [2024-12-06 13:54:04.064726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:90928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.322 [2024-12-06 13:54:04.064739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.322 [2024-12-06 13:54:04.064753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:90936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.322 [2024-12-06 13:54:04.064765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.322 [2024-12-06 13:54:04.064779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.322 [2024-12-06 13:54:04.064798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.322 [2024-12-06 13:54:04.064812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:90952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.322 [2024-12-06 13:54:04.064825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.322 [2024-12-06 13:54:04.064840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:90960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.322 [2024-12-06 13:54:04.064852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.322 [2024-12-06 13:54:04.064867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:90968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.322 [2024-12-06 13:54:04.064879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.322 [2024-12-06 13:54:04.064893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:90976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.322 [2024-12-06 
13:54:04.064906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.322 [2024-12-06 13:54:04.064920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:90984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:11.322 [2024-12-06 13:54:04.064932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.322 [2024-12-06 13:54:04.064946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:90296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.322 [2024-12-06 13:54:04.064959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.322 [2024-12-06 13:54:04.064973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:90304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.322 [2024-12-06 13:54:04.064986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.322 [2024-12-06 13:54:04.065000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:90312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.322 [2024-12-06 13:54:04.065013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.322 [2024-12-06 13:54:04.065026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:90320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.322 [2024-12-06 13:54:04.065039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.322 [2024-12-06 13:54:04.065053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:90328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.322 [2024-12-06 13:54:04.065066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.322 [2024-12-06 13:54:04.065080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:90336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.322 [2024-12-06 13:54:04.065093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.322 [2024-12-06 13:54:04.065121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:90344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.322 [2024-12-06 13:54:04.065135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.322 [2024-12-06 13:54:04.065157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:90352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.322 [2024-12-06 13:54:04.065172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.322 [2024-12-06 13:54:04.065186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:90360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.322 [2024-12-06 13:54:04.065199] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.322 [2024-12-06 13:54:04.065214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:90368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.322 [2024-12-06 13:54:04.065226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.322 [2024-12-06 13:54:04.065240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:90376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.322 [2024-12-06 13:54:04.065253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.322 [2024-12-06 13:54:04.065267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:90384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.322 [2024-12-06 13:54:04.065281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.322 [2024-12-06 13:54:04.065295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:90392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.323 [2024-12-06 13:54:04.065308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.323 [2024-12-06 13:54:04.065322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:90400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.323 [2024-12-06 13:54:04.065335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.323 [2024-12-06 13:54:04.065349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:90408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.323 [2024-12-06 13:54:04.065362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.323 [2024-12-06 13:54:04.065376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:90416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.323 [2024-12-06 13:54:04.065389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.323 [2024-12-06 13:54:04.065403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:90424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.323 [2024-12-06 13:54:04.065416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.323 [2024-12-06 13:54:04.065430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:90432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.323 [2024-12-06 13:54:04.065442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.323 [2024-12-06 13:54:04.065456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:90440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:11.323 [2024-12-06 13:54:04.065469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.323 [2024-12-06 13:54:04.065518] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:11.323 [2024-12-06 13:54:04.065532] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:11.323 [2024-12-06 13:54:04.065550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90448 len:8 PRP1 0x0 PRP2 0x0 00:15:11.323 [2024-12-06 13:54:04.065564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.323 [2024-12-06 13:54:04.065627] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:15:11.323 [2024-12-06 13:54:04.065682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.323 [2024-12-06 13:54:04.065704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.323 [2024-12-06 13:54:04.065718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.323 [2024-12-06 13:54:04.065731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.323 [2024-12-06 13:54:04.065744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.323 [2024-12-06 13:54:04.065756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.323 [2024-12-06 13:54:04.065770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.323 [2024-12-06 13:54:04.065782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.323 [2024-12-06 13:54:04.065794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:15:11.323 [2024-12-06 13:54:04.069288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:15:11.323 [2024-12-06 13:54:04.069327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce6c60 (9): Bad file descriptor 00:15:11.323 [2024-12-06 13:54:04.100279] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
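The burst of ABORTED - SQ DELETION notices above is the expected side effect of a path failover: when the active path is torn down, I/O queued on the deleted submission queue is aborted and bdev_nvme resets onto the next registered path. A minimal sketch of how this run drives that behaviour, using the NQN, address and ports shown in this log (the real sequence lives in test/nvmf/host/failover.sh):

```bash
# Illustrative sketch only; the actual flow is in test/nvmf/host/failover.sh.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
nqn=nqn.2016-06.io.spdk:cnode1

# Publish two extra target listeners so the host has paths to fail over to.
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4421
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4422

# Attach all three paths to one bdev controller in failover mode.
for port in 4420 4421 4422; do
    $rpc -s "$sock" bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.3 -s "$port" -f ipv4 -n "$nqn" -x failover
done

# Detaching the active path triggers the queue deletion, the aborted I/O and
# the "Resetting controller successful" messages captured above.
$rpc -s "$sock" bdev_nvme_detach_controller NVMe0 \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n "$nqn"
```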
00:15:11.323 9757.10 IOPS, 38.11 MiB/s [2024-12-06T13:54:10.727Z] 9804.27 IOPS, 38.30 MiB/s [2024-12-06T13:54:10.727Z] 9848.25 IOPS, 38.47 MiB/s [2024-12-06T13:54:10.727Z] 9866.38 IOPS, 38.54 MiB/s [2024-12-06T13:54:10.727Z] 9863.79 IOPS, 38.53 MiB/s [2024-12-06T13:54:10.727Z] 9863.93 IOPS, 38.53 MiB/s 00:15:11.323 Latency(us) 00:15:11.323 [2024-12-06T13:54:10.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.323 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:11.323 Verification LBA range: start 0x0 length 0x4000 00:15:11.323 NVMe0n1 : 15.01 9865.16 38.54 230.60 0.00 12649.68 618.12 17754.30 00:15:11.323 [2024-12-06T13:54:10.727Z] =================================================================================================================== 00:15:11.323 [2024-12-06T13:54:10.727Z] Total : 9865.16 38.54 230.60 0.00 12649.68 618.12 17754.30 00:15:11.323 Received shutdown signal, test time was about 15.000000 seconds 00:15:11.323 00:15:11.323 Latency(us) 00:15:11.323 [2024-12-06T13:54:10.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.323 [2024-12-06T13:54:10.727Z] =================================================================================================================== 00:15:11.323 [2024-12-06T13:54:10.727Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:11.323 13:54:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:15:11.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:11.323 13:54:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:15:11.323 13:54:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:15:11.323 13:54:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:15:11.323 13:54:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75408 00:15:11.323 13:54:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75408 /var/tmp/bdevperf.sock 00:15:11.323 13:54:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75408 ']' 00:15:11.323 13:54:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:11.323 13:54:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:11.323 13:54:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
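Once the 15-second verify workload finishes, the script checks that the expected number of failovers actually happened by counting reset messages in the captured bdevperf output. A sketch of that check, assuming the captured output is the try.txt file cat'd later in this log (the real script's error handling may differ):

```bash
# Three failover cycles are driven, so the captured bdevperf log must contain
# exactly three "Resetting controller successful" lines.
log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
count=$(grep -c 'Resetting controller successful' "$log")
(( count == 3 )) || { echo "expected 3 successful resets, got $count" >&2; exit 1; }
```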
00:15:11.323 13:54:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:11.323 13:54:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:11.323 13:54:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:11.323 13:54:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:15:11.323 13:54:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:11.323 [2024-12-06 13:54:10.594932] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:11.323 13:54:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:15:11.583 [2024-12-06 13:54:10.842594] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:15:11.583 13:54:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:11.846 NVMe0n1 00:15:11.846 13:54:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:12.105 00:15:12.384 13:54:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:12.384 00:15:12.643 13:54:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:12.643 13:54:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:15:12.903 13:54:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:13.162 13:54:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:15:16.450 13:54:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:16.450 13:54:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:15:16.450 13:54:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75476 00:15:16.450 13:54:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:16.450 13:54:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75476 00:15:17.387 { 00:15:17.387 "results": [ 00:15:17.387 { 00:15:17.387 "job": "NVMe0n1", 00:15:17.387 "core_mask": "0x1", 00:15:17.387 "workload": "verify", 00:15:17.387 "status": "finished", 00:15:17.387 "verify_range": { 00:15:17.387 "start": 0, 00:15:17.387 "length": 16384 00:15:17.387 }, 00:15:17.388 "queue_depth": 128, 
00:15:17.388 "io_size": 4096, 00:15:17.388 "runtime": 1.007951, 00:15:17.388 "iops": 7767.242653660744, 00:15:17.388 "mibps": 30.34079161586228, 00:15:17.388 "io_failed": 0, 00:15:17.388 "io_timeout": 0, 00:15:17.388 "avg_latency_us": 16414.771723777565, 00:15:17.388 "min_latency_us": 2234.181818181818, 00:15:17.388 "max_latency_us": 15728.64 00:15:17.388 } 00:15:17.388 ], 00:15:17.388 "core_count": 1 00:15:17.388 } 00:15:17.388 13:54:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:17.388 [2024-12-06 13:54:10.034515] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:15:17.388 [2024-12-06 13:54:10.034621] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75408 ] 00:15:17.388 [2024-12-06 13:54:10.172924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.388 [2024-12-06 13:54:10.219920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.388 [2024-12-06 13:54:10.276980] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:17.388 [2024-12-06 13:54:12.313999] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:15:17.388 [2024-12-06 13:54:12.314134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:17.388 [2024-12-06 13:54:12.314160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:17.388 [2024-12-06 13:54:12.314176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:17.388 [2024-12-06 13:54:12.314189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:17.388 [2024-12-06 13:54:12.314203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:17.388 [2024-12-06 13:54:12.314215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:17.388 [2024-12-06 13:54:12.314228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:17.388 [2024-12-06 13:54:12.314241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:17.388 [2024-12-06 13:54:12.314254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:15:17.388 [2024-12-06 13:54:12.314302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:15:17.388 [2024-12-06 13:54:12.314332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99fc60 (9): Bad file descriptor 00:15:17.388 [2024-12-06 13:54:12.325154] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:15:17.388 Running I/O for 1 seconds... 
00:15:17.388 7701.00 IOPS, 30.08 MiB/s 00:15:17.388 Latency(us) 00:15:17.388 [2024-12-06T13:54:16.792Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:17.388 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:17.388 Verification LBA range: start 0x0 length 0x4000 00:15:17.388 NVMe0n1 : 1.01 7767.24 30.34 0.00 0.00 16414.77 2234.18 15728.64 00:15:17.388 [2024-12-06T13:54:16.792Z] =================================================================================================================== 00:15:17.388 [2024-12-06T13:54:16.792Z] Total : 7767.24 30.34 0.00 0.00 16414.77 2234.18 15728.64 00:15:17.388 13:54:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:17.388 13:54:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:15:17.647 13:54:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:17.906 13:54:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:17.906 13:54:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:15:18.163 13:54:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:18.421 13:54:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:15:21.730 13:54:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:21.730 13:54:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:15:21.730 13:54:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 75408 00:15:21.730 13:54:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75408 ']' 00:15:21.730 13:54:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75408 00:15:21.730 13:54:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:15:21.730 13:54:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:21.730 13:54:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75408 00:15:21.730 13:54:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:21.730 killing process with pid 75408 00:15:21.730 13:54:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:21.730 13:54:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75408' 00:15:21.730 13:54:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75408 00:15:21.730 13:54:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75408 00:15:21.989 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:15:21.989 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:22.249 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:15:22.249 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:22.249 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:15:22.249 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:22.249 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:15:22.249 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:22.249 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:15:22.249 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:22.249 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:22.249 rmmod nvme_tcp 00:15:22.249 rmmod nvme_fabrics 00:15:22.249 rmmod nvme_keyring 00:15:22.249 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:22.249 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:15:22.249 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:15:22.249 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 75156 ']' 00:15:22.249 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 75156 00:15:22.249 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75156 ']' 00:15:22.249 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75156 00:15:22.249 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:15:22.249 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:22.249 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75156 00:15:22.249 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:22.249 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:22.249 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75156' 00:15:22.249 killing process with pid 75156 00:15:22.249 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75156 00:15:22.249 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75156 00:15:22.508 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:22.508 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:22.508 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:22.508 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:15:22.508 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:22.508 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:15:22.508 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:15:22.508 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:22.508 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:22.508 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:22.508 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:22.508 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:22.508 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:22.508 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:22.508 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:22.508 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:22.508 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:22.508 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:22.508 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:22.508 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:22.508 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:22.768 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:22.768 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:22.768 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:22.768 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:22.768 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:22.768 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:15:22.768 ************************************ 00:15:22.768 END TEST nvmf_failover 00:15:22.768 ************************************ 00:15:22.768 00:15:22.768 real 0m31.556s 00:15:22.768 user 2m1.338s 00:15:22.768 sys 0m5.356s 00:15:22.768 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:22.768 13:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:22.768 13:54:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:22.768 13:54:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:22.768 13:54:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:22.768 13:54:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:22.768 ************************************ 00:15:22.768 START TEST nvmf_host_discovery 00:15:22.768 ************************************ 00:15:22.768 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:22.768 * Looking for test storage... 
00:15:22.768 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:22.768 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:22.768 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:15:22.768 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:23.028 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:23.028 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:23.028 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:23.028 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:23.028 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:15:23.028 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:15:23.028 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:15:23.028 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:15:23.028 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:15:23.028 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:15:23.028 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:15:23.028 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:23.028 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:15:23.028 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:15:23.028 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:23.028 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:23.028 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:23.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.029 --rc genhtml_branch_coverage=1 00:15:23.029 --rc genhtml_function_coverage=1 00:15:23.029 --rc genhtml_legend=1 00:15:23.029 --rc geninfo_all_blocks=1 00:15:23.029 --rc geninfo_unexecuted_blocks=1 00:15:23.029 00:15:23.029 ' 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:23.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.029 --rc genhtml_branch_coverage=1 00:15:23.029 --rc genhtml_function_coverage=1 00:15:23.029 --rc genhtml_legend=1 00:15:23.029 --rc geninfo_all_blocks=1 00:15:23.029 --rc geninfo_unexecuted_blocks=1 00:15:23.029 00:15:23.029 ' 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:23.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.029 --rc genhtml_branch_coverage=1 00:15:23.029 --rc genhtml_function_coverage=1 00:15:23.029 --rc genhtml_legend=1 00:15:23.029 --rc geninfo_all_blocks=1 00:15:23.029 --rc geninfo_unexecuted_blocks=1 00:15:23.029 00:15:23.029 ' 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:23.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.029 --rc genhtml_branch_coverage=1 00:15:23.029 --rc genhtml_function_coverage=1 00:15:23.029 --rc genhtml_legend=1 00:15:23.029 --rc geninfo_all_blocks=1 00:15:23.029 --rc geninfo_unexecuted_blocks=1 00:15:23.029 00:15:23.029 ' 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cfa2def7-c8af-457f-82a0-b312efdea7f4 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:23.029 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:23.029 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
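discovery.sh reuses the same virtual test network as the failover run: a target namespace joined to the host by veth pairs, all bridged together, with 10.0.0.1/.2 on the initiator side and 10.0.0.3/.4 inside the namespace. The nvmf_veth_init steps logged below boil down to roughly the following sketch (device names and addresses are the ones shown in this log; ordering is condensed, and the real helper tags its iptables rules with an SPDK_NVMF comment so teardown can strip them):

```bash
# Sketch of the veth/netns topology that nvmf_veth_init builds (see the log below).
ip netns add nvmf_tgt_ns_spdk

# Two veth pairs for the initiator side, two for the target side.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target-side endpoints live inside the namespace.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: initiator 10.0.0.1/.2, target 10.0.0.3/.4.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up and bridge the four peer ends together.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Allow NVMe/TCP traffic in and across the bridge, then sanity-ping each address.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
```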
00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:23.030 Cannot find device "nvmf_init_br" 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:23.030 Cannot find device "nvmf_init_br2" 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:23.030 Cannot find device "nvmf_tgt_br" 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:23.030 Cannot find device "nvmf_tgt_br2" 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:23.030 Cannot find device "nvmf_init_br" 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:23.030 Cannot find device "nvmf_init_br2" 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:23.030 Cannot find device "nvmf_tgt_br" 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:23.030 Cannot find device "nvmf_tgt_br2" 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:23.030 Cannot find device "nvmf_br" 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:23.030 Cannot find device "nvmf_init_if" 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:23.030 Cannot find device "nvmf_init_if2" 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:23.030 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:23.030 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:23.030 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:23.290 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:23.290 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:23.290 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:23.290 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:23.290 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:23.290 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:23.290 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:23.290 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:23.290 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:23.290 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:23.290 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:23.290 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:23.290 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:23.290 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:23.290 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:23.290 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:23.290 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:23.290 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:23.290 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:23.290 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:23.290 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:23.290 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:23.290 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:23.290 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:23.290 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:23.290 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:23.290 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:15:23.290 00:15:23.290 --- 10.0.0.3 ping statistics --- 00:15:23.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:23.290 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:15:23.290 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:23.290 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:23.291 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:15:23.291 00:15:23.291 --- 10.0.0.4 ping statistics --- 00:15:23.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:23.291 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:15:23.291 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:23.291 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:23.291 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:15:23.291 00:15:23.291 --- 10.0.0.1 ping statistics --- 00:15:23.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:23.291 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:15:23.291 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:23.291 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:23.291 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:15:23.291 00:15:23.291 --- 10.0.0.2 ping statistics --- 00:15:23.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:23.291 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:15:23.291 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:23.291 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:15:23.291 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:23.291 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:23.291 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:23.291 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:23.291 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:23.291 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:23.291 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:23.291 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:15:23.291 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:23.291 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:23.291 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:23.291 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=75805 00:15:23.291 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:23.291 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 75805 00:15:23.291 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 75805 ']' 00:15:23.291 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.291 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:23.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:23.291 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.291 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:23.291 13:54:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:23.291 [2024-12-06 13:54:22.666337] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:15:23.291 [2024-12-06 13:54:22.666412] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:23.550 [2024-12-06 13:54:22.805199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.550 [2024-12-06 13:54:22.860670] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:23.550 [2024-12-06 13:54:22.860733] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:23.550 [2024-12-06 13:54:22.860744] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:23.550 [2024-12-06 13:54:22.860752] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:23.550 [2024-12-06 13:54:22.860758] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:23.550 [2024-12-06 13:54:22.861219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:23.550 [2024-12-06 13:54:22.919371] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:24.487 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:24.487 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:15:24.487 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:24.487 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:24.488 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.488 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:24.488 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:24.488 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.488 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.488 [2024-12-06 13:54:23.683461] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:24.488 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.488 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:15:24.488 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.488 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.488 [2024-12-06 13:54:23.695599] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:15:24.488 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.488 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:15:24.488 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.488 13:54:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.488 null0 00:15:24.488 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.488 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:15:24.488 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.488 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.488 null1 00:15:24.488 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.488 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:15:24.488 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.488 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.488 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.488 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=75837 00:15:24.488 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:15:24.488 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 75837 /tmp/host.sock 00:15:24.488 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 75837 ']' 00:15:24.488 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:15:24.488 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:24.488 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:24.488 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:24.488 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:24.488 13:54:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.488 [2024-12-06 13:54:23.786364] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:15:24.488 [2024-12-06 13:54:23.786456] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75837 ] 00:15:24.747 [2024-12-06 13:54:23.939000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.747 [2024-12-06 13:54:24.005320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.747 [2024-12-06 13:54:24.069070] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:24.747 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:24.747 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:15:24.747 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:24.747 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:15:24.747 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.747 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:25.007 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.007 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:15:25.007 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.007 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:25.007 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.007 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:15:25.007 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:15:25.007 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:25.007 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.007 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:25.007 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:25.007 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:25.007 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:25.007 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.007 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:15:25.007 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:15:25.007 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:25.007 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.007 13:54:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:25.007 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:25.007 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:25.007 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:25.007 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.007 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:15:25.007 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:15:25.008 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.008 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:25.008 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.008 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:15:25.008 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:25.008 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.008 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:25.008 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:25.008 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:25.008 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:25.008 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.008 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:15:25.008 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:15:25.008 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:25.008 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:25.008 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.008 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:25.008 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:25.008 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:25.008 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.008 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:15:25.008 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:15:25.008 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.008 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:25.008 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.008 13:54:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:15:25.008 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:25.008 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:25.008 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:25.008 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:25.008 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.008 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:25.268 [2024-12-06 13:54:24.523772] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:25.268 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.528 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:25.528 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:15:25.528 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:15:25.528 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:25.528 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:15:25.528 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.528 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:25.528 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.528 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:25.528 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:25.528 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:25.528 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:25.528 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:25.528 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:15:25.528 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:25.528 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.528 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:25.528 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:25.528 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:25.528 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:25.528 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.528 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:15:25.528 13:54:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:15:25.787 [2024-12-06 13:54:25.168958] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:15:25.787 [2024-12-06 13:54:25.168987] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:15:25.787 [2024-12-06 13:54:25.169018] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:25.787 
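The discovery controller has just attached at 10.0.0.3:8009 and sent its log page command; host/discovery.sh@105-107 now polls until the resulting nvme0 controller and its nvme0n1 bdev show up on the host side. The get_subsystem_names and get_bdev_list helpers seen in the trace are jq pipelines over the host RPC socket, retried by waitforcondition up to 10 times. A sketch of the same polling, with the rpc.py path assumed:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed location of rpc.py
    HOST_SOCK=/tmp/host.sock

    get_subsystem_names() {
        "$RPC" -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {
        "$RPC" -s "$HOST_SOCK" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # Retry loop equivalent to waitforcondition with max=10.
    for _ in $(seq 1 10); do
        [[ "$(get_subsystem_names)" == "nvme0" && "$(get_bdev_list)" == "nvme0n1" ]] && break
        sleep 1
    done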
[2024-12-06 13:54:25.175031] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:15:26.046 [2024-12-06 13:54:25.229560] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:15:26.046 [2024-12-06 13:54:25.230702] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x91adc0:1 started. 00:15:26.046 [2024-12-06 13:54:25.232460] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:15:26.046 [2024-12-06 13:54:25.232483] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:15:26.046 [2024-12-06 13:54:25.237623] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x91adc0 was disconnected and freed. delete nvme_qpair. 00:15:26.613 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:26.613 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:26.613 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:15:26.613 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:26.613 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:26.614 13:54:25 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:26.614 13:54:25 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:26.614 13:54:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.614 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:26.614 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:15:26.614 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:15:26.614 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:26.614 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:15:26.614 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.614 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:26.874 [2024-12-06 13:54:26.021818] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x9290b0:1 started. 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:26.874 [2024-12-06 13:54:26.028837] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x9290b0 was disconnected and freed. delete nvme_qpair. 
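The is_notification_count_eq checks around this point all lean on the same bookkeeping: notify_get_notifications -i <last_id> returns only the events newer than that id, the test counts them with jq, and notify_id advances by that count (0 to 1 after the first namespace above, later 1 to 2, and finally 2 to 4). A sketch of that pattern as it appears in the trace, with the rpc.py path assumed:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed location of rpc.py
    HOST_SOCK=/tmp/host.sock
    notify_id=0

    get_notification_count() {
        # Count events newer than the last consumed id, then advance the cursor.
        notification_count=$("$RPC" -s "$HOST_SOCK" notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

    get_notification_count
    (( notification_count == 1 )) || echo "expected exactly one new notification" >&2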
00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:26.874 [2024-12-06 13:54:26.135695] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:26.874 [2024-12-06 13:54:26.136023] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:15:26.874 [2024-12-06 13:54:26.136055] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:26.874 [2024-12-06 13:54:26.142082] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:26.874 [2024-12-06 13:54:26.202587] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:15:26.874 [2024-12-06 13:54:26.202647] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:15:26.874 [2024-12-06 13:54:26.202660] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:15:26.874 [2024-12-06 13:54:26.202666] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:26.874 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # (( max-- )) 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:27.134 [2024-12-06 13:54:26.380327] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:15:27.134 [2024-12-06 13:54:26.380408] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:27.134 [2024-12-06 13:54:26.386375] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:15:27.134 [2024-12-06 13:54:26.386425] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:15:27.134 [2024-12-06 13:54:26.386529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.134 [2024-12-06 13:54:26.386557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:15:27.134 [2024-12-06 13:54:26.386568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.134 [2024-12-06 13:54:26.386592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.134 [2024-12-06 13:54:26.386619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.134 [2024-12-06 13:54:26.386667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.134 [2024-12-06 13:54:26.386687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.134 [2024-12-06 13:54:26.386699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.134 [2024-12-06 13:54:26.386713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f6fb0 is same with the state(6) to be set 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:27.134 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.134 13:54:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:27.135 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:27.135 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:27.135 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.135 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:27.135 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:27.135 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:27.135 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:27.135 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:27.135 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:27.135 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:15:27.135 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:15:27.135 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:27.135 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:27.135 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.135 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:27.135 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:27.135 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:27.135 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:27.395 13:54:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 
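Stopping discovery (host/discovery.sh@134) is expected to detach the nvme0 controller and unregister both namespace bdevs, which is what the empty-string comparisons above and below verify. A compact sketch of that teardown check, under the same assumed rpc.py path:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed location of rpc.py
    HOST_SOCK=/tmp/host.sock

    "$RPC" -s "$HOST_SOCK" bdev_nvme_stop_discovery -b nvme

    # Both lists should drain to empty once the detach completes.
    for _ in $(seq 1 10); do
        ctrls=$("$RPC" -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name' | xargs)
        bdevs=$("$RPC" -s "$HOST_SOCK" bdev_get_bdevs | jq -r '.[].name' | xargs)
        [[ -z "$ctrls" && -z "$bdevs" ]] && break
        sleep 1
    done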
00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:27.395 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.655 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:15:27.655 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:15:27.655 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:15:27.655 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:27.655 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:27.655 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.655 13:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.593 [2024-12-06 13:54:27.819971] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:15:28.593 [2024-12-06 13:54:27.820007] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:15:28.593 [2024-12-06 13:54:27.820041] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:28.593 [2024-12-06 13:54:27.825999] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:15:28.593 [2024-12-06 13:54:27.884324] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:15:28.593 [2024-12-06 13:54:27.885072] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x927dd0:1 started. 00:15:28.593 [2024-12-06 13:54:27.887056] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:15:28.593 [2024-12-06 13:54:27.887111] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:15:28.593 13:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.593 13:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:28.593 [2024-12-06 13:54:27.888969] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x927dd0 was disconnected and freed. delete nvme_qpair. 
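The trace above restarts discovery on the host RPC socket and then checks that a duplicate start is rejected. A minimal sketch of the equivalent manual call, assuming the /tmp/host.sock socket and repo path used by this run:

  # start a discovery service named "nvme" for 10.0.0.3:8009 and wait for the attach (-w)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
      -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
  # issuing the same call again while discovery "nvme" is active is expected to fail,
  # which is what the JSON-RPC error -17 ("File exists") below exercises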
00:15:28.593 13:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:15:28.593 13:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:28.593 13:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:28.593 13:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:28.593 13:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:28.593 13:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:28.593 13:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:28.593 13:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.593 13:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.593 request: 00:15:28.593 { 00:15:28.593 "name": "nvme", 00:15:28.593 "trtype": "tcp", 00:15:28.593 "traddr": "10.0.0.3", 00:15:28.593 "adrfam": "ipv4", 00:15:28.593 "trsvcid": "8009", 00:15:28.593 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:28.594 "wait_for_attach": true, 00:15:28.594 "method": "bdev_nvme_start_discovery", 00:15:28.594 "req_id": 1 00:15:28.594 } 00:15:28.594 Got JSON-RPC error response 00:15:28.594 response: 00:15:28.594 { 00:15:28.594 "code": -17, 00:15:28.594 "message": "File exists" 00:15:28.594 } 00:15:28.594 13:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:28.594 13:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:15:28.594 13:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:28.594 13:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:28.594 13:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:28.594 13:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:15:28.594 13:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:28.594 13:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:28.594 13:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.594 13:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:28.594 13:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.594 13:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:28.594 13:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.594 13:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:15:28.594 13:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:15:28.594 13:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:28.594 13:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.594 13:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:28.594 13:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.594 13:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:28.594 13:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:28.855 13:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.855 request: 00:15:28.855 { 00:15:28.855 "name": "nvme_second", 00:15:28.855 "trtype": "tcp", 00:15:28.855 "traddr": "10.0.0.3", 00:15:28.855 "adrfam": "ipv4", 00:15:28.855 "trsvcid": "8009", 00:15:28.855 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:28.855 "wait_for_attach": true, 00:15:28.855 "method": "bdev_nvme_start_discovery", 00:15:28.855 "req_id": 1 00:15:28.855 } 00:15:28.855 Got JSON-RPC error response 00:15:28.855 response: 00:15:28.855 { 00:15:28.855 "code": -17, 00:15:28.855 "message": "File exists" 00:15:28.855 } 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # 
get_discovery_ctrlrs 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.855 13:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:29.794 [2024-12-06 13:54:29.147374] uring.c: 664:uring_sock_create: *ERROR*: connect() 
failed, errno = 111 00:15:29.794 [2024-12-06 13:54:29.147443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9238d0 with addr=10.0.0.3, port=8010 00:15:29.794 [2024-12-06 13:54:29.147461] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:29.794 [2024-12-06 13:54:29.147470] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:29.794 [2024-12-06 13:54:29.147477] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:15:31.172 [2024-12-06 13:54:30.147375] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:31.172 [2024-12-06 13:54:30.147440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9238d0 with addr=10.0.0.3, port=8010 00:15:31.172 [2024-12-06 13:54:30.147459] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:31.172 [2024-12-06 13:54:30.147467] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:31.172 [2024-12-06 13:54:30.147475] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:15:32.108 [2024-12-06 13:54:31.147299] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:15:32.108 request: 00:15:32.108 { 00:15:32.109 "name": "nvme_second", 00:15:32.109 "trtype": "tcp", 00:15:32.109 "traddr": "10.0.0.3", 00:15:32.109 "adrfam": "ipv4", 00:15:32.109 "trsvcid": "8010", 00:15:32.109 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:32.109 "wait_for_attach": false, 00:15:32.109 "attach_timeout_ms": 3000, 00:15:32.109 "method": "bdev_nvme_start_discovery", 00:15:32.109 "req_id": 1 00:15:32.109 } 00:15:32.109 Got JSON-RPC error response 00:15:32.109 response: 00:15:32.109 { 00:15:32.109 "code": -110, 00:15:32.109 "message": "Connection timed out" 00:15:32.109 } 00:15:32.109 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:32.109 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:15:32.109 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:32.109 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:32.109 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:32.109 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:15:32.109 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:32.109 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.109 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.109 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:32.109 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:32.109 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:32.109 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.109 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:15:32.109 13:54:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:15:32.109 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 75837 00:15:32.109 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:15:32.109 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:32.109 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:15:32.109 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:32.109 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:15:32.109 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:32.109 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:32.109 rmmod nvme_tcp 00:15:32.109 rmmod nvme_fabrics 00:15:32.109 rmmod nvme_keyring 00:15:32.109 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:32.109 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:15:32.109 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:15:32.109 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 75805 ']' 00:15:32.109 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 75805 00:15:32.109 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 75805 ']' 00:15:32.109 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 75805 00:15:32.109 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:15:32.109 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:32.109 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75805 00:15:32.109 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:32.109 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:32.109 killing process with pid 75805 00:15:32.109 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75805' 00:15:32.109 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 75805 00:15:32.109 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 75805 00:15:32.367 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:32.367 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:32.367 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:32.367 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:15:32.367 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:15:32.367 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:32.367 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:15:32.367 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:32.367 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:32.367 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:32.367 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:32.367 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:32.367 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:32.367 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:32.367 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:32.367 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:32.367 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:32.367 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:32.367 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:32.367 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:32.367 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:32.367 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:32.367 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:32.367 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.367 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:32.367 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.626 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:15:32.626 00:15:32.626 real 0m9.768s 00:15:32.626 user 0m18.052s 00:15:32.626 sys 0m2.169s 00:15:32.626 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:32.626 ************************************ 00:15:32.626 END TEST nvmf_host_discovery 00:15:32.626 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.626 ************************************ 00:15:32.626 13:54:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:15:32.626 13:54:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:32.626 13:54:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:32.626 13:54:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:32.626 ************************************ 00:15:32.626 START TEST nvmf_host_multipath_status 00:15:32.626 ************************************ 00:15:32.626 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:15:32.626 * Looking for test storage... 00:15:32.626 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:32.626 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:32.626 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:15:32.626 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:32.626 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:32.626 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:32.626 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:32.626 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:32.626 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:15:32.626 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:15:32.626 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:15:32.626 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:15:32.626 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:15:32.626 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:15:32.626 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:15:32.626 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:32.626 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:15:32.626 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:15:32.626 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:32.626 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:32.626 13:54:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:15:32.626 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:15:32.626 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:32.626 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:15:32.626 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:15:32.626 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:15:32.626 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:15:32.626 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:32.626 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:15:32.626 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:15:32.626 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:32.626 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:32.626 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:15:32.626 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:32.626 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:32.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.626 --rc genhtml_branch_coverage=1 00:15:32.626 --rc genhtml_function_coverage=1 00:15:32.626 --rc genhtml_legend=1 00:15:32.626 --rc geninfo_all_blocks=1 00:15:32.626 --rc geninfo_unexecuted_blocks=1 00:15:32.626 00:15:32.626 ' 00:15:32.626 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:32.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.626 --rc genhtml_branch_coverage=1 00:15:32.626 --rc genhtml_function_coverage=1 00:15:32.626 --rc genhtml_legend=1 00:15:32.626 --rc geninfo_all_blocks=1 00:15:32.626 --rc geninfo_unexecuted_blocks=1 00:15:32.626 00:15:32.626 ' 00:15:32.626 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:32.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.626 --rc genhtml_branch_coverage=1 00:15:32.626 --rc genhtml_function_coverage=1 00:15:32.626 --rc genhtml_legend=1 00:15:32.626 --rc geninfo_all_blocks=1 00:15:32.626 --rc geninfo_unexecuted_blocks=1 00:15:32.626 00:15:32.626 ' 00:15:32.626 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:32.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.626 --rc genhtml_branch_coverage=1 00:15:32.626 --rc genhtml_function_coverage=1 00:15:32.626 --rc genhtml_legend=1 00:15:32.626 --rc geninfo_all_blocks=1 00:15:32.626 --rc geninfo_unexecuted_blocks=1 00:15:32.626 00:15:32.626 ' 00:15:32.626 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:32.626 13:54:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:15:32.626 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:32.626 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:32.626 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:32.626 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:32.626 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:32.626 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:32.626 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:32.626 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:32.626 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:32.627 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:32.627 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:15:32.627 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=cfa2def7-c8af-457f-82a0-b312efdea7f4 00:15:32.627 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:32.627 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:32.627 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:32.627 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:32.627 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:32.627 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:15:32.886 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:32.886 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:32.886 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:32.886 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.886 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.886 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.886 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:32.887 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:32.887 Cannot find device "nvmf_init_br" 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:32.887 Cannot find device "nvmf_init_br2" 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:32.887 Cannot find device "nvmf_tgt_br" 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:32.887 Cannot find device "nvmf_tgt_br2" 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:32.887 Cannot find device "nvmf_init_br" 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:32.887 Cannot find device "nvmf_init_br2" 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:32.887 Cannot find device "nvmf_tgt_br" 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:32.887 Cannot find device "nvmf_tgt_br2" 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:32.887 Cannot find device "nvmf_br" 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:15:32.887 Cannot find device "nvmf_init_if" 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:32.887 Cannot find device "nvmf_init_if2" 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:32.887 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:32.887 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:32.887 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:32.888 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:32.888 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:32.888 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:32.888 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:32.888 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:32.888 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:32.888 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:32.888 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:33.146 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:33.146 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:33.146 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:33.146 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:33.146 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:33.146 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:33.146 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:33.146 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:33.146 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:33.146 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:33.146 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:33.146 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:33.146 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:33.146 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:33.146 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:33.146 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:33.146 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:33.146 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:33.146 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:33.146 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:33.147 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:33.147 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:33.147 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:33.147 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.106 ms 00:15:33.147 00:15:33.147 --- 10.0.0.3 ping statistics --- 00:15:33.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.147 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:15:33.147 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:33.147 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:33.147 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:15:33.147 00:15:33.147 --- 10.0.0.4 ping statistics --- 00:15:33.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.147 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:15:33.147 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:33.147 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:33.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:15:33.147 00:15:33.147 --- 10.0.0.1 ping statistics --- 00:15:33.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.147 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:15:33.147 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:33.147 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:33.147 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:15:33.147 00:15:33.147 --- 10.0.0.2 ping statistics --- 00:15:33.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.147 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:15:33.147 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:33.147 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:15:33.147 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:33.147 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:33.147 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:33.147 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:33.147 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:33.147 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:33.147 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:33.147 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:15:33.147 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:33.147 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:33.147 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:33.147 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=76331 00:15:33.147 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 76331 00:15:33.147 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:33.147 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76331 ']' 00:15:33.147 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.147 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:33.147 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
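At this point nvmftestinit has rebuilt the veth/bridge topology inside the nvmf_tgt_ns_spdk namespace and verified it with the pings above, and nvmfappstart launches the NVMe-oF target in that namespace. A minimal sketch of the launch as it appears in this run (paths, core mask and trace mask are this environment's values, not requirements):

  # run nvmf_tgt inside the test namespace: shared-memory id 0 (-i 0),
  # all tracepoint groups enabled (-e 0xFFFF), reactors on cores 0-1 (-m 0x3)
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  # the harness then waits for the target to listen on /var/tmp/spdk.sock before sending RPCs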
00:15:33.147 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:33.147 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:33.405 [2024-12-06 13:54:32.555074] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:15:33.405 [2024-12-06 13:54:32.555187] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.405 [2024-12-06 13:54:32.707541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:33.405 [2024-12-06 13:54:32.769813] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:33.405 [2024-12-06 13:54:32.769881] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:33.405 [2024-12-06 13:54:32.769903] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:33.405 [2024-12-06 13:54:32.769916] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:33.405 [2024-12-06 13:54:32.769925] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:33.405 [2024-12-06 13:54:32.771249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:33.405 [2024-12-06 13:54:32.771263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.664 [2024-12-06 13:54:32.830327] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:33.664 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:33.664 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:15:33.664 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:33.664 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:33.664 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:33.664 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:33.664 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76331 00:15:33.664 13:54:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:33.923 [2024-12-06 13:54:33.242684] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:33.923 13:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:34.182 Malloc0 00:15:34.446 13:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:15:34.741 13:54:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:35.006 13:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:35.006 [2024-12-06 13:54:34.346672] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:35.006 13:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:35.266 [2024-12-06 13:54:34.578770] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:35.266 13:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76379 00:15:35.266 13:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:15:35.266 13:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:35.266 13:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76379 /var/tmp/bdevperf.sock 00:15:35.266 13:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76379 ']' 00:15:35.266 13:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:35.266 13:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:35.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:35.266 13:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
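Condensed from the RPCs traced in this run (the initiator-side calls appear a few entries further down), this is roughly the sequence needed to reproduce the two-listener ANA setup the rest of the test exercises. Here rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py and bdevperf for build/examples/bdevperf; sockets, NQNs, and addresses are the ones shown in the log.

  # target side (default RPC socket /var/tmp/spdk.sock)
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

  # initiator side: one bdev, two paths, multipath enabled
  bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 \
    -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10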
00:15:35.266 13:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:35.266 13:54:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:36.642 13:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:36.642 13:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:15:36.642 13:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:15:36.642 13:54:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:15:37.209 Nvme0n1 00:15:37.209 13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:15:37.468 Nvme0n1 00:15:37.468 13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:15:37.468 13:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:15:39.368 13:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:15:39.368 13:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:15:39.627 13:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:15:39.886 13:54:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:15:40.821 13:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:15:40.821 13:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:40.821 13:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:40.821 13:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:41.390 13:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:41.390 13:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:41.390 13:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:41.390 13:54:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:41.390 13:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:41.390 13:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:41.390 13:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:41.390 13:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:41.649 13:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:41.649 13:54:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:41.649 13:54:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:41.649 13:54:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:41.907 13:54:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:41.907 13:54:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:41.907 13:54:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:41.907 13:54:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:42.166 13:54:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:42.166 13:54:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:42.166 13:54:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:42.166 13:54:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:42.425 13:54:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:42.425 13:54:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:15:42.425 13:54:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:42.684 13:54:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:15:42.943 13:54:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:15:44.322 13:54:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:15:44.322 13:54:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:44.322 13:54:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:44.322 13:54:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:44.322 13:54:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:44.322 13:54:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:44.322 13:54:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:44.322 13:54:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:44.581 13:54:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:44.581 13:54:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:44.581 13:54:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:44.581 13:54:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:44.840 13:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:44.840 13:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:44.840 13:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:44.840 13:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:45.099 13:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:45.099 13:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:45.099 13:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:45.099 13:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:45.358 13:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:45.358 13:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:45.358 13:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:45.358 13:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:45.617 13:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:45.617 13:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:15:45.617 13:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:45.876 13:54:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:15:46.135 13:54:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:15:47.509 13:54:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:15:47.509 13:54:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:47.509 13:54:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:47.509 13:54:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:47.509 13:54:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:47.509 13:54:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:47.509 13:54:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:47.509 13:54:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:47.767 13:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:47.767 13:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:47.767 13:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:47.767 13:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:48.026 13:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:48.026 13:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:15:48.026 13:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:48.026 13:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:48.286 13:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:48.286 13:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:48.286 13:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:48.286 13:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:48.545 13:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:48.545 13:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:48.545 13:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:48.545 13:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:48.805 13:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:48.805 13:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:15:48.805 13:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:49.064 13:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:15:49.323 13:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:15:50.290 13:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:15:50.290 13:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:50.290 13:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:50.290 13:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:50.549 13:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:50.549 13:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:50.549 13:54:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:50.549 13:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:50.808 13:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:50.808 13:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:50.808 13:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:50.808 13:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:51.067 13:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:51.067 13:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:51.067 13:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:51.067 13:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:51.327 13:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:51.327 13:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:51.327 13:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:51.327 13:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:51.587 13:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:51.587 13:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:15:51.587 13:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:51.587 13:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:51.847 13:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:51.847 13:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:15:51.847 13:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:52.107 13:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:15:52.367 13:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:15:53.306 13:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:15:53.306 13:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:53.306 13:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:53.306 13:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:53.565 13:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:53.565 13:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:53.565 13:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:53.566 13:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:53.825 13:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:53.825 13:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:53.825 13:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:53.825 13:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:54.083 13:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:54.083 13:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:54.083 13:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:54.083 13:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:54.343 13:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:54.343 13:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:15:54.343 13:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:54.343 13:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:15:54.602 13:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:54.602 13:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:15:54.602 13:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:54.602 13:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:55.170 13:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:55.170 13:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:15:55.171 13:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:55.171 13:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:15:55.430 13:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:15:56.803 13:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:15:56.803 13:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:56.803 13:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:56.803 13:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:56.803 13:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:56.803 13:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:56.803 13:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:56.804 13:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:57.061 13:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:57.061 13:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:57.061 13:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:57.061 13:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:57.627 13:54:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:57.627 13:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:57.627 13:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:57.627 13:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:57.627 13:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:57.627 13:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:15:57.627 13:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:57.627 13:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:57.886 13:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:57.886 13:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:57.886 13:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:57.886 13:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:58.145 13:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:58.145 13:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:15:58.404 13:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:15:58.404 13:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:15:58.662 13:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:15:58.920 13:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:16:00.299 13:54:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:16:00.299 13:54:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:00.299 13:54:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:00.299 13:54:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:00.299 13:54:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:00.299 13:54:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:00.299 13:54:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:00.299 13:54:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:00.559 13:54:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:00.559 13:54:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:00.559 13:54:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:00.559 13:54:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:00.818 13:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:00.818 13:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:00.818 13:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:00.818 13:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:01.076 13:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:01.076 13:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:01.076 13:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:01.076 13:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:01.645 13:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:01.645 13:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:01.645 13:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:01.645 13:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:01.645 13:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:01.645 13:55:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:16:01.645 13:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:02.214 13:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:02.473 13:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:16:03.410 13:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:16:03.410 13:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:03.410 13:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:03.410 13:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:03.669 13:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:03.669 13:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:03.669 13:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:03.669 13:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:04.263 13:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:04.263 13:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:04.263 13:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:04.263 13:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:04.522 13:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:04.523 13:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:04.523 13:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:04.523 13:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:04.797 13:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:04.797 13:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:04.797 13:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:04.797 13:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:05.059 13:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:05.059 13:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:05.059 13:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:05.059 13:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:05.317 13:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:05.317 13:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:16:05.317 13:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:05.575 13:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:16:05.834 13:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:16:07.212 13:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:16:07.212 13:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:07.212 13:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:07.212 13:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:07.212 13:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:07.212 13:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:07.212 13:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:07.212 13:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:07.471 13:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:07.471 13:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:16:07.471 13:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:07.471 13:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:07.731 13:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:07.731 13:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:07.731 13:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:07.731 13:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:07.990 13:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:07.990 13:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:07.990 13:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:07.990 13:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:08.249 13:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:08.249 13:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:08.249 13:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:08.249 13:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:08.509 13:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:08.509 13:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:16:08.509 13:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:08.768 13:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:09.027 13:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:16:09.965 13:55:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:16:09.965 13:55:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:09.965 13:55:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:09.965 13:55:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:10.225 13:55:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:10.225 13:55:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:10.225 13:55:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:10.225 13:55:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:10.485 13:55:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:10.485 13:55:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:10.485 13:55:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:10.485 13:55:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:11.054 13:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:11.054 13:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:11.054 13:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:11.054 13:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:11.312 13:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:11.312 13:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:11.312 13:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:11.312 13:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:11.571 13:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:11.571 13:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:11.571 13:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:11.571 13:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:16:11.836 13:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:11.836 13:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76379 00:16:11.836 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76379 ']' 00:16:11.836 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76379 00:16:11.836 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:16:11.836 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:11.836 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76379 00:16:11.836 killing process with pid 76379 00:16:11.836 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:11.836 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:11.836 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76379' 00:16:11.836 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76379 00:16:11.836 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76379 00:16:11.836 { 00:16:11.836 "results": [ 00:16:11.836 { 00:16:11.836 "job": "Nvme0n1", 00:16:11.836 "core_mask": "0x4", 00:16:11.836 "workload": "verify", 00:16:11.836 "status": "terminated", 00:16:11.836 "verify_range": { 00:16:11.836 "start": 0, 00:16:11.836 "length": 16384 00:16:11.836 }, 00:16:11.836 "queue_depth": 128, 00:16:11.836 "io_size": 4096, 00:16:11.836 "runtime": 34.21809, 00:16:11.836 "iops": 9597.905669194277, 00:16:11.836 "mibps": 37.491819020290144, 00:16:11.836 "io_failed": 0, 00:16:11.836 "io_timeout": 0, 00:16:11.836 "avg_latency_us": 13308.11393740094, 00:16:11.836 "min_latency_us": 210.38545454545454, 00:16:11.836 "max_latency_us": 4026531.84 00:16:11.836 } 00:16:11.836 ], 00:16:11.836 "core_count": 1 00:16:11.836 } 00:16:11.836 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76379 00:16:11.836 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:11.836 [2024-12-06 13:54:34.657571] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:16:11.836 [2024-12-06 13:54:34.657676] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76379 ] 00:16:11.836 [2024-12-06 13:54:34.810585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.836 [2024-12-06 13:54:34.883099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:11.836 [2024-12-06 13:54:34.954683] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:11.836 Running I/O for 90 seconds... 
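Every check_status/port_status call in this section boils down to the same probe: ask the bdevperf RPC socket for bdev_nvme_get_io_paths and pull one boolean per port out with jq. The payload below is illustrative and trimmed to just the fields the filters touch (real output carries more per-path detail); the jq expression is the one traced above.

  # trimmed, illustrative shape of bdev_nvme_get_io_paths output:
  # { "poll_groups": [ { "io_paths": [
  #     { "current": true,  "connected": true, "accessible": true,
  #       "transport": { "trtype": "TCP", "traddr": "10.0.0.3", "trsvcid": "4420" } },
  #     { "current": false, "connected": true, "accessible": true,
  #       "transport": { "trtype": "TCP", "traddr": "10.0.0.3", "trsvcid": "4421" } }
  # ] } ] }
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
    | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
  # prints true/false; check_status repeats this for current/connected/accessible on
  # both 4420 and 4421 and compares the six values against the expected ANA layout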
00:16:11.836 9504.00 IOPS, 37.12 MiB/s [2024-12-06T13:55:11.240Z] 9896.00 IOPS, 38.66 MiB/s [2024-12-06T13:55:11.240Z] 10184.00 IOPS, 39.78 MiB/s [2024-12-06T13:55:11.240Z] 10256.00 IOPS, 40.06 MiB/s [2024-12-06T13:55:11.240Z] 10382.40 IOPS, 40.56 MiB/s [2024-12-06T13:55:11.240Z] 10331.83 IOPS, 40.36 MiB/s [2024-12-06T13:55:11.240Z] 10393.86 IOPS, 40.60 MiB/s [2024-12-06T13:55:11.240Z] 10403.75 IOPS, 40.64 MiB/s [2024-12-06T13:55:11.240Z] 10363.00 IOPS, 40.48 MiB/s [2024-12-06T13:55:11.240Z] 10364.30 IOPS, 40.49 MiB/s [2024-12-06T13:55:11.240Z] 10388.64 IOPS, 40.58 MiB/s [2024-12-06T13:55:11.240Z] 10418.25 IOPS, 40.70 MiB/s [2024-12-06T13:55:11.240Z] 10474.08 IOPS, 40.91 MiB/s [2024-12-06T13:55:11.240Z] 10462.50 IOPS, 40.87 MiB/s [2024-12-06T13:55:11.240Z] [2024-12-06 13:54:51.399937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:41168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.836 [2024-12-06 13:54:51.400042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:11.836 [2024-12-06 13:54:51.400128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:41176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.836 [2024-12-06 13:54:51.400150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:11.836 [2024-12-06 13:54:51.400172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:41184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.836 [2024-12-06 13:54:51.400188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:11.836 [2024-12-06 13:54:51.400208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:41192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.836 [2024-12-06 13:54:51.400223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:11.836 [2024-12-06 13:54:51.400242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:41200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.836 [2024-12-06 13:54:51.400257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:11.836 [2024-12-06 13:54:51.400278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.836 [2024-12-06 13:54:51.400293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:11.836 [2024-12-06 13:54:51.400312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:41216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.836 [2024-12-06 13:54:51.400327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:11.836 [2024-12-06 13:54:51.400346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:41224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.836 [2024-12-06 13:54:51.400359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:11.836 [2024-12-06 13:54:51.400378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:41232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.836 [2024-12-06 13:54:51.400392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.836 [2024-12-06 13:54:51.400460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:41240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.836 [2024-12-06 13:54:51.400475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:11.836 [2024-12-06 13:54:51.400494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:41248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.836 [2024-12-06 13:54:51.400508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:11.836 [2024-12-06 13:54:51.400527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:41256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.836 [2024-12-06 13:54:51.400541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:11.836 [2024-12-06 13:54:51.400560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:41264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.836 [2024-12-06 13:54:51.400575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:11.836 [2024-12-06 13:54:51.400594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:41272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.836 [2024-12-06 13:54:51.400608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:11.836 [2024-12-06 13:54:51.400627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:40720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.836 [2024-12-06 13:54:51.400641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:11.836 [2024-12-06 13:54:51.400662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:40728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.836 [2024-12-06 13:54:51.400677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:11.836 [2024-12-06 13:54:51.400696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.837 [2024-12-06 13:54:51.400710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:11.837 [2024-12-06 13:54:51.400728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:40744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.837 [2024-12-06 13:54:51.400742] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:11.837 [2024-12-06 13:54:51.400761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:40752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.837 [2024-12-06 13:54:51.400775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:11.837 [2024-12-06 13:54:51.400794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:40760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.837 [2024-12-06 13:54:51.400810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:11.837 [2024-12-06 13:54:51.400829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:40768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.837 [2024-12-06 13:54:51.400854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:11.837 [2024-12-06 13:54:51.400884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:40776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.837 [2024-12-06 13:54:51.400900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:11.837 [2024-12-06 13:54:51.400919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.837 [2024-12-06 13:54:51.400933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:11.837 [2024-12-06 13:54:51.400953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:41288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.837 [2024-12-06 13:54:51.400968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:11.837 [2024-12-06 13:54:51.401020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:41296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.837 [2024-12-06 13:54:51.401039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:11.837 [2024-12-06 13:54:51.401060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:41304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.837 [2024-12-06 13:54:51.401075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:11.837 [2024-12-06 13:54:51.401094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:41312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.837 [2024-12-06 13:54:51.401124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:11.837 [2024-12-06 13:54:51.401145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:41320 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:16:11.837 [2024-12-06 13:54:51.401160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:11.837 [2024-12-06 13:54:51.401179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:41328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.837 [2024-12-06 13:54:51.401210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:11.837 [2024-12-06 13:54:51.401229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:41336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.837 [2024-12-06 13:54:51.401244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:11.837 [2024-12-06 13:54:51.401265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:41344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.837 [2024-12-06 13:54:51.401279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:11.837 [2024-12-06 13:54:51.401299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:41352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.837 [2024-12-06 13:54:51.401314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:11.837 [2024-12-06 13:54:51.401346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:40784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.837 [2024-12-06 13:54:51.401360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:11.837 [2024-12-06 13:54:51.401379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:40792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.837 [2024-12-06 13:54:51.401405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:11.837 [2024-12-06 13:54:51.401426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:40800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.837 [2024-12-06 13:54:51.401440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:11.837 [2024-12-06 13:54:51.401460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:40808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.837 [2024-12-06 13:54:51.401475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:11.837 [2024-12-06 13:54:51.401495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:40816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.837 [2024-12-06 13:54:51.401509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:11.837 [2024-12-06 13:54:51.401528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:86 nsid:1 lba:40824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.837 [2024-12-06 13:54:51.401555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:11.837 [2024-12-06 13:54:51.401574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:40832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.837 [2024-12-06 13:54:51.401588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:11.837 [2024-12-06 13:54:51.401607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:40840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.837 [2024-12-06 13:54:51.401621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:11.837 [2024-12-06 13:54:51.401640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:40848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.837 [2024-12-06 13:54:51.401654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.837 [2024-12-06 13:54:51.401674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:40856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.837 [2024-12-06 13:54:51.401688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:11.837 [2024-12-06 13:54:51.401706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:40864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.837 [2024-12-06 13:54:51.401720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:11.837 [2024-12-06 13:54:51.401739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.837 [2024-12-06 13:54:51.401754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:11.837 [2024-12-06 13:54:51.401773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:40880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.837 [2024-12-06 13:54:51.401788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:11.837 [2024-12-06 13:54:51.401807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:40888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.837 [2024-12-06 13:54:51.401828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:11.837 [2024-12-06 13:54:51.401848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:40896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.837 [2024-12-06 13:54:51.401863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:11.837 [2024-12-06 13:54:51.401901] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:40904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.837 [2024-12-06 13:54:51.401916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:11.837 [2024-12-06 13:54:51.401939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:41360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.837 [2024-12-06 13:54:51.401954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:11.837 [2024-12-06 13:54:51.401973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:41368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.837 [2024-12-06 13:54:51.401988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:11.837 [2024-12-06 13:54:51.402007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:41376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.837 [2024-12-06 13:54:51.402022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:11.837 [2024-12-06 13:54:51.402042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:41384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.837 [2024-12-06 13:54:51.402056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:11.837 [2024-12-06 13:54:51.402075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:41392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.837 [2024-12-06 13:54:51.402089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:11.837 [2024-12-06 13:54:51.402107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:41400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.837 [2024-12-06 13:54:51.402131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:11.837 [2024-12-06 13:54:51.402152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:41408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.837 [2024-12-06 13:54:51.402185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:11.838 [2024-12-06 13:54:51.402227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:41416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.838 [2024-12-06 13:54:51.402241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:11.838 [2024-12-06 13:54:51.402260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.838 [2024-12-06 13:54:51.402275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0072 p:0 
m:0 dnr:0 00:16:11.838 [2024-12-06 13:54:51.402294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:41432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.838 [2024-12-06 13:54:51.402308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:11.838 [2024-12-06 13:54:51.402336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:41440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.838 [2024-12-06 13:54:51.402350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:11.838 [2024-12-06 13:54:51.402369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:41448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.838 [2024-12-06 13:54:51.402383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:11.838 [2024-12-06 13:54:51.402404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:41456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.838 [2024-12-06 13:54:51.402418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:11.838 [2024-12-06 13:54:51.402436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.838 [2024-12-06 13:54:51.402450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:11.838 [2024-12-06 13:54:51.402480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.838 [2024-12-06 13:54:51.402494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:11.838 [2024-12-06 13:54:51.402514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:41480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.838 [2024-12-06 13:54:51.402528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:11.838 [2024-12-06 13:54:51.402547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:41488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.838 [2024-12-06 13:54:51.402561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:11.838 [2024-12-06 13:54:51.402580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:41496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.838 [2024-12-06 13:54:51.402594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:11.838 [2024-12-06 13:54:51.402630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:41504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.838 [2024-12-06 13:54:51.402644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:11.838 [2024-12-06 13:54:51.402663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:41512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.838 [2024-12-06 13:54:51.402678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:11.838 [2024-12-06 13:54:51.402696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:41520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.838 [2024-12-06 13:54:51.402711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:11.838 [2024-12-06 13:54:51.402730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.838 [2024-12-06 13:54:51.402745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:11.838 [2024-12-06 13:54:51.402772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:41536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.838 [2024-12-06 13:54:51.402788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:11.838 [2024-12-06 13:54:51.402807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.838 [2024-12-06 13:54:51.402821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:11.838 [2024-12-06 13:54:51.402840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:40912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.838 [2024-12-06 13:54:51.402855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.838 [2024-12-06 13:54:51.402874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:40920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.838 [2024-12-06 13:54:51.402888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:11.838 [2024-12-06 13:54:51.402914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:40928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.838 [2024-12-06 13:54:51.402944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:11.838 [2024-12-06 13:54:51.402963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:40936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.838 [2024-12-06 13:54:51.402982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:11.838 [2024-12-06 13:54:51.403001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:40944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.838 [2024-12-06 13:54:51.403015] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:11.838 [2024-12-06 13:54:51.403033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:40952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.838 [2024-12-06 13:54:51.403047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:11.838 [2024-12-06 13:54:51.403065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.838 [2024-12-06 13:54:51.403080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:11.838 [2024-12-06 13:54:51.403098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:40968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.838 [2024-12-06 13:54:51.403123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:11.838 [2024-12-06 13:54:51.403153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:40976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.838 [2024-12-06 13:54:51.403168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:11.838 [2024-12-06 13:54:51.403187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:40984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.838 [2024-12-06 13:54:51.403201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:11.838 [2024-12-06 13:54:51.403219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.838 [2024-12-06 13:54:51.403240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:11.838 [2024-12-06 13:54:51.403260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:41000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.838 [2024-12-06 13:54:51.403275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:11.838 [2024-12-06 13:54:51.403294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:41008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.838 [2024-12-06 13:54:51.403308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:11.838 [2024-12-06 13:54:51.403327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:41016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.838 [2024-12-06 13:54:51.403351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:11.838 [2024-12-06 13:54:51.403371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:41024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:16:11.838 [2024-12-06 13:54:51.403410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:11.838 [2024-12-06 13:54:51.403435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:41032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.838 [2024-12-06 13:54:51.403450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:11.838 [2024-12-06 13:54:51.403474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:41552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.838 [2024-12-06 13:54:51.403490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:11.838 [2024-12-06 13:54:51.403510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.838 [2024-12-06 13:54:51.403536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:11.838 [2024-12-06 13:54:51.403557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:41568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.838 [2024-12-06 13:54:51.403572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:11.838 [2024-12-06 13:54:51.403591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.838 [2024-12-06 13:54:51.403606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:11.838 [2024-12-06 13:54:51.403637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:41584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.838 [2024-12-06 13:54:51.403652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:11.838 [2024-12-06 13:54:51.403672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:41592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.839 [2024-12-06 13:54:51.403687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:11.839 [2024-12-06 13:54:51.403718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:41600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.839 [2024-12-06 13:54:51.403742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:11.839 [2024-12-06 13:54:51.403763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:41608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.839 [2024-12-06 13:54:51.403778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:11.839 [2024-12-06 13:54:51.403798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 
lba:41616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.839 [2024-12-06 13:54:51.403813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:11.839 [2024-12-06 13:54:51.403846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:41624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.839 [2024-12-06 13:54:51.403861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:11.839 [2024-12-06 13:54:51.403880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:41632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.839 [2024-12-06 13:54:51.403895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:11.839 [2024-12-06 13:54:51.403914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:41640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.839 [2024-12-06 13:54:51.403929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:11.839 [2024-12-06 13:54:51.403948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:41648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.839 [2024-12-06 13:54:51.403962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:11.839 [2024-12-06 13:54:51.403982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:41656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.839 [2024-12-06 13:54:51.404011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:11.839 [2024-12-06 13:54:51.404031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:41040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.839 [2024-12-06 13:54:51.404046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:11.839 [2024-12-06 13:54:51.404065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:41048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.839 [2024-12-06 13:54:51.404080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:11.839 [2024-12-06 13:54:51.404100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:41056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.839 [2024-12-06 13:54:51.404114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.839 [2024-12-06 13:54:51.404155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:41064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.839 [2024-12-06 13:54:51.404172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:11.839 [2024-12-06 13:54:51.404192] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:41072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.839 [2024-12-06 13:54:51.404206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:11.839 [2024-12-06 13:54:51.404235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:41080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.839 [2024-12-06 13:54:51.404250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:11.839 [2024-12-06 13:54:51.404271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:41088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.839 [2024-12-06 13:54:51.404287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:11.839 [2024-12-06 13:54:51.404930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:41096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.839 [2024-12-06 13:54:51.404955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:11.839 [2024-12-06 13:54:51.404994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:41664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.839 [2024-12-06 13:54:51.405010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:11.839 [2024-12-06 13:54:51.405036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:41672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.839 [2024-12-06 13:54:51.405051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:11.839 [2024-12-06 13:54:51.405077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:41680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.839 [2024-12-06 13:54:51.405092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:11.839 [2024-12-06 13:54:51.405130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:41688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.839 [2024-12-06 13:54:51.405147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:11.839 [2024-12-06 13:54:51.405182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.839 [2024-12-06 13:54:51.405197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:11.839 [2024-12-06 13:54:51.405222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:41704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.839 [2024-12-06 13:54:51.405237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002d p:0 m:0 dnr:0 
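All of the completions logged in this burst carry the status printed as ASYMMETRIC ACCESS INACCESSIBLE (03/02), i.e. NVMe status code type 3h (Path Related Status) with status code 02h: the I/Os landed on a path whose ANA group was inaccessible at that moment, which is consistent with the port_status checks above (port 4421 reporting accessible=false). One quick way to gauge how much traffic hit that state is to count those completions in the captured log; a small sketch, assuming the try.txt file cat'd above:

# Count I/O completions that came back with ANA state "inaccessible" in the
# captured bdevperf log (count occurrences, not lines, in case of wrapping).
grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' \
    /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt | wc -l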
00:16:11.839 [2024-12-06 13:54:51.405262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:41712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.839 [2024-12-06 13:54:51.405279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:11.839 [2024-12-06 13:54:51.405320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:41720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.839 [2024-12-06 13:54:51.405345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:11.839 [2024-12-06 13:54:51.405372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:41728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.839 [2024-12-06 13:54:51.405387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:11.839 [2024-12-06 13:54:51.405423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:41736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.839 [2024-12-06 13:54:51.405440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:11.839 [2024-12-06 13:54:51.405465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:41104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.839 [2024-12-06 13:54:51.405480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:11.839 [2024-12-06 13:54:51.405505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:41112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.839 [2024-12-06 13:54:51.405519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:11.839 [2024-12-06 13:54:51.405544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:41120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.839 [2024-12-06 13:54:51.405559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:11.839 [2024-12-06 13:54:51.405583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:41128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.839 [2024-12-06 13:54:51.405598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:11.839 [2024-12-06 13:54:51.405634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:41136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.839 [2024-12-06 13:54:51.405649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:11.839 [2024-12-06 13:54:51.405674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:41144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.839 [2024-12-06 13:54:51.405689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:38 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:11.839 [2024-12-06 13:54:51.405714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:41152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.839 [2024-12-06 13:54:51.405729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:11.839 [2024-12-06 13:54:51.405754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:41160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.839 [2024-12-06 13:54:51.405769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:11.839 10169.73 IOPS, 39.73 MiB/s [2024-12-06T13:55:11.243Z] 9534.12 IOPS, 37.24 MiB/s [2024-12-06T13:55:11.243Z] 8973.29 IOPS, 35.05 MiB/s [2024-12-06T13:55:11.243Z] 8474.78 IOPS, 33.10 MiB/s [2024-12-06T13:55:11.243Z] 8257.37 IOPS, 32.26 MiB/s [2024-12-06T13:55:11.243Z] 8326.50 IOPS, 32.53 MiB/s [2024-12-06T13:55:11.243Z] 8429.81 IOPS, 32.93 MiB/s [2024-12-06T13:55:11.243Z] 8665.09 IOPS, 33.85 MiB/s [2024-12-06T13:55:11.243Z] 8865.13 IOPS, 34.63 MiB/s [2024-12-06T13:55:11.243Z] 9004.50 IOPS, 35.17 MiB/s [2024-12-06T13:55:11.243Z] 9055.20 IOPS, 35.37 MiB/s [2024-12-06T13:55:11.243Z] 9060.46 IOPS, 35.39 MiB/s [2024-12-06T13:55:11.243Z] 9067.37 IOPS, 35.42 MiB/s [2024-12-06T13:55:11.243Z] 9066.36 IOPS, 35.42 MiB/s [2024-12-06T13:55:11.243Z] 9176.34 IOPS, 35.85 MiB/s [2024-12-06T13:55:11.243Z] 9333.37 IOPS, 36.46 MiB/s [2024-12-06T13:55:11.243Z] 9475.61 IOPS, 37.01 MiB/s [2024-12-06T13:55:11.243Z] [2024-12-06 13:55:08.276397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:39888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.839 [2024-12-06 13:55:08.276462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:11.840 [2024-12-06 13:55:08.276524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:39904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.840 [2024-12-06 13:55:08.276567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:11.840 [2024-12-06 13:55:08.276588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:39920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.840 [2024-12-06 13:55:08.276602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:11.840 [2024-12-06 13:55:08.276621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:39936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.840 [2024-12-06 13:55:08.276634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:11.840 [2024-12-06 13:55:08.276652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:39952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.840 [2024-12-06 13:55:08.276664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:11.840 [2024-12-06 13:55:08.276682] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:39968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.840 [2024-12-06 13:55:08.276695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:11.840 [2024-12-06 13:55:08.276713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:39544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.840 [2024-12-06 13:55:08.276726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:11.840 [2024-12-06 13:55:08.276743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:39576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.840 [2024-12-06 13:55:08.276756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:11.840 [2024-12-06 13:55:08.276774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:39608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.840 [2024-12-06 13:55:08.276786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:11.840 [2024-12-06 13:55:08.276804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:39648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.840 [2024-12-06 13:55:08.276816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:11.840 [2024-12-06 13:55:08.276834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.840 [2024-12-06 13:55:08.276847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:11.840 [2024-12-06 13:55:08.276864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:40008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.840 [2024-12-06 13:55:08.276877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:11.840 [2024-12-06 13:55:08.276894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:40024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.840 [2024-12-06 13:55:08.276907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:11.840 [2024-12-06 13:55:08.276924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:39664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.840 [2024-12-06 13:55:08.276937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:11.840 [2024-12-06 13:55:08.276964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:39696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.840 [2024-12-06 13:55:08.276978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005b p:0 m:0 dnr:0 
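The per-interval samples interleaved above drop from roughly 10169.73 IOPS to 8257.37 IOPS and climb back to 9475.61 IOPS, a dip that lines up with the bursts of ANA-inaccessible completions logged around it, while the summary JSON's average throughput is consistent with the 4096-byte I/O size (MiB/s = IOPS * 4096 / 2^20). Two small sketches, assuming the same try.txt capture shown above:

# Extract the per-interval IOPS samples from the captured log, in order, to
# eyeball the dip and recovery around each path flip.
grep -oE '[0-9]+\.[0-9]+ IOPS' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt | awk '{print $1}'

# Cross-check the summary JSON: with 4096-byte I/Os, MiB/s follows from IOPS.
awk 'BEGIN { printf "%.6f MiB/s\n", 9597.905669194277 * 4096 / 1048576 }'   # ~= 37.491819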
00:16:11.840 [2024-12-06 13:55:08.276996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:39480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.840 [2024-12-06 13:55:08.277009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:11.840 [2024-12-06 13:55:08.277026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:39512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.840 [2024-12-06 13:55:08.277041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:11.840 [2024-12-06 13:55:08.277059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:40040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.840 [2024-12-06 13:55:08.277072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:11.840 [2024-12-06 13:55:08.277090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:40056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.840 [2024-12-06 13:55:08.277103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:11.840 [2024-12-06 13:55:08.277152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:40072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.840 [2024-12-06 13:55:08.277180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:11.840 [2024-12-06 13:55:08.277201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:40088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.840 [2024-12-06 13:55:08.277215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:11.840 [2024-12-06 13:55:08.277234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:40104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.840 [2024-12-06 13:55:08.277247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.840 [2024-12-06 13:55:08.277266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:40120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.840 [2024-12-06 13:55:08.277280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:11.840 [2024-12-06 13:55:08.277298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:40136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.840 [2024-12-06 13:55:08.277312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:11.840 [2024-12-06 13:55:08.277331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:39552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.840 [2024-12-06 13:55:08.277344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:61 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:11.840 [2024-12-06 13:55:08.277363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:39584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.840 [2024-12-06 13:55:08.277377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:11.840 [2024-12-06 13:55:08.277404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:39616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.840 [2024-12-06 13:55:08.277419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:11.840 [2024-12-06 13:55:08.277438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.840 [2024-12-06 13:55:08.277466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:11.840 [2024-12-06 13:55:08.277484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:40168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.840 [2024-12-06 13:55:08.277497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:11.840 [2024-12-06 13:55:08.277516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:39728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.840 [2024-12-06 13:55:08.277530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:11.840 [2024-12-06 13:55:08.277548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.840 [2024-12-06 13:55:08.277562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:11.840 [2024-12-06 13:55:08.277580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.840 [2024-12-06 13:55:08.277593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:11.840 [2024-12-06 13:55:08.277612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.840 [2024-12-06 13:55:08.277625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:11.840 [2024-12-06 13:55:08.277643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:40192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.840 [2024-12-06 13:55:08.277657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:11.840 [2024-12-06 13:55:08.277675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:40208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.840 [2024-12-06 13:55:08.277689] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:11.841 [2024-12-06 13:55:08.277707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:39640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.841 [2024-12-06 13:55:08.277721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:11.841 [2024-12-06 13:55:08.277739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:39672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.841 [2024-12-06 13:55:08.277752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:11.841 [2024-12-06 13:55:08.277770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:39704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.841 [2024-12-06 13:55:08.277783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:11.841 [2024-12-06 13:55:08.277808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:40232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.841 [2024-12-06 13:55:08.277823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:11.841 [2024-12-06 13:55:08.277842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:39816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.841 [2024-12-06 13:55:08.277856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:11.841 [2024-12-06 13:55:08.277874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:40256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.841 [2024-12-06 13:55:08.277887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:11.841 [2024-12-06 13:55:08.277905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:40272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.841 [2024-12-06 13:55:08.277919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:11.841 [2024-12-06 13:55:08.277937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:40288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.841 [2024-12-06 13:55:08.277950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:11.841 [2024-12-06 13:55:08.277985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:39832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.841 [2024-12-06 13:55:08.277998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:11.841 [2024-12-06 13:55:08.278017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:39864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:11.841 [2024-12-06 13:55:08.278030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:11.841 [2024-12-06 13:55:08.278049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:39720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.841 [2024-12-06 13:55:08.278079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:11.841 [2024-12-06 13:55:08.278099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:39752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.841 [2024-12-06 13:55:08.278129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:11.841 [2024-12-06 13:55:08.278150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:39784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.841 [2024-12-06 13:55:08.278177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:11.841 [2024-12-06 13:55:08.278199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:39808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.841 [2024-12-06 13:55:08.278214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:11.841 [2024-12-06 13:55:08.278234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:40312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.841 [2024-12-06 13:55:08.278249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:11.841 [2024-12-06 13:55:08.278269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:40328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.841 [2024-12-06 13:55:08.278293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:11.841 [2024-12-06 13:55:08.278314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:40344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.841 [2024-12-06 13:55:08.278329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:11.841 [2024-12-06 13:55:08.278349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:39824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.841 [2024-12-06 13:55:08.278363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:11.841 [2024-12-06 13:55:08.278384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:39856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.841 [2024-12-06 13:55:08.278398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.841 [2024-12-06 13:55:08.279759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 
lba:40360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.841 [2024-12-06 13:55:08.279789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:11.841 [2024-12-06 13:55:08.279816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.841 [2024-12-06 13:55:08.279833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:11.841 [2024-12-06 13:55:08.279854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:40392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.841 [2024-12-06 13:55:08.279868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:11.841 [2024-12-06 13:55:08.279888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:40408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.841 [2024-12-06 13:55:08.279932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:11.841 [2024-12-06 13:55:08.279951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:40424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.841 [2024-12-06 13:55:08.279965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:11.841 [2024-12-06 13:55:08.279984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:39896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.841 [2024-12-06 13:55:08.279997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:11.841 [2024-12-06 13:55:08.280016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:39928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.841 [2024-12-06 13:55:08.280030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:11.841 [2024-12-06 13:55:08.280057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:39960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.841 [2024-12-06 13:55:08.280071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:11.841 [2024-12-06 13:55:08.280091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.841 [2024-12-06 13:55:08.280116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:11.841 [2024-12-06 13:55:08.280137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.841 [2024-12-06 13:55:08.280152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:11.841 [2024-12-06 13:55:08.280186] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:40000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.841 [2024-12-06 13:55:08.280201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:11.841 [2024-12-06 13:55:08.280219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:40032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.841 [2024-12-06 13:55:08.280233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:11.841 [2024-12-06 13:55:08.280253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:40472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.841 [2024-12-06 13:55:08.280268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:11.841 [2024-12-06 13:55:08.280290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:40488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.841 [2024-12-06 13:55:08.280305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:11.841 [2024-12-06 13:55:08.280325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:40504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.841 [2024-12-06 13:55:08.280339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:11.841 [2024-12-06 13:55:08.280357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:40520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.841 [2024-12-06 13:55:08.280371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:11.841 [2024-12-06 13:55:08.280390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.841 [2024-12-06 13:55:08.280404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:11.841 [2024-12-06 13:55:08.280423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:39904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.841 [2024-12-06 13:55:08.280437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:11.841 [2024-12-06 13:55:08.280456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:39936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.841 [2024-12-06 13:55:08.280470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:11.841 [2024-12-06 13:55:08.280489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:39968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.841 [2024-12-06 13:55:08.280503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 
00:16:11.842 [2024-12-06 13:55:08.280522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:39576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.842 [2024-12-06 13:55:08.280535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:11.842 [2024-12-06 13:55:08.280564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:39648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.842 [2024-12-06 13:55:08.280578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:11.842 [2024-12-06 13:55:08.280630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:40008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.842 [2024-12-06 13:55:08.280645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:11.842 [2024-12-06 13:55:08.280665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:39664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.842 [2024-12-06 13:55:08.280679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:11.842 [2024-12-06 13:55:08.280707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:39480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.842 [2024-12-06 13:55:08.280723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:11.842 [2024-12-06 13:55:08.282557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:40040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.842 [2024-12-06 13:55:08.282596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:11.842 [2024-12-06 13:55:08.282691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:40072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.842 [2024-12-06 13:55:08.282744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:11.842 [2024-12-06 13:55:08.282777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.842 [2024-12-06 13:55:08.282800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:11.842 [2024-12-06 13:55:08.282828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:40136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.842 [2024-12-06 13:55:08.282849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:11.842 [2024-12-06 13:55:08.282877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:39584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.842 [2024-12-06 13:55:08.282898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:118 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:11.842 [2024-12-06 13:55:08.282926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:40152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.842 [2024-12-06 13:55:08.282946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:11.842 [2024-12-06 13:55:08.282974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:39728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.842 [2024-12-06 13:55:08.282995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.842 [2024-12-06 13:55:08.283023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:39792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.842 [2024-12-06 13:55:08.283044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:11.842 [2024-12-06 13:55:08.283095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:40192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.842 [2024-12-06 13:55:08.283117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:11.842 [2024-12-06 13:55:08.283174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:39640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.842 [2024-12-06 13:55:08.283199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:11.842 [2024-12-06 13:55:08.283228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:39704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.842 [2024-12-06 13:55:08.283249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:11.842 [2024-12-06 13:55:08.283278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:39816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.842 [2024-12-06 13:55:08.283300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:11.842 [2024-12-06 13:55:08.283336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:40272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.842 [2024-12-06 13:55:08.283363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:11.842 [2024-12-06 13:55:08.283398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.842 [2024-12-06 13:55:08.283417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:11.842 [2024-12-06 13:55:08.283438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.842 [2024-12-06 13:55:08.283453] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:11.842 [2024-12-06 13:55:08.283481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.842 [2024-12-06 13:55:08.283497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:11.842 [2024-12-06 13:55:08.283518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:40312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.842 [2024-12-06 13:55:08.283533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:11.842 [2024-12-06 13:55:08.283554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:40344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.842 [2024-12-06 13:55:08.283570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:11.842 [2024-12-06 13:55:08.283591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:39856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.842 [2024-12-06 13:55:08.283606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:11.842 [2024-12-06 13:55:08.283627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:40064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.842 [2024-12-06 13:55:08.283643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:11.842 [2024-12-06 13:55:08.283664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:40096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.842 [2024-12-06 13:55:08.283695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:11.842 [2024-12-06 13:55:08.283717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.842 [2024-12-06 13:55:08.283733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:11.842 [2024-12-06 13:55:08.283754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:40560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.842 [2024-12-06 13:55:08.283769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:11.842 [2024-12-06 13:55:08.283790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:40128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.842 [2024-12-06 13:55:08.283805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:11.842 [2024-12-06 13:55:08.283826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:40160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:11.842 [2024-12-06 13:55:08.283842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:11.842 [2024-12-06 13:55:08.283863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:40200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.842 [2024-12-06 13:55:08.283878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:11.842 [2024-12-06 13:55:08.283899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:40224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.842 [2024-12-06 13:55:08.283914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:11.842 [2024-12-06 13:55:08.283935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:40576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.842 [2024-12-06 13:55:08.283952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:11.842 [2024-12-06 13:55:08.283996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:40592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.842 [2024-12-06 13:55:08.284012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:11.842 [2024-12-06 13:55:08.284032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:40608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.842 [2024-12-06 13:55:08.284047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:11.842 [2024-12-06 13:55:08.284081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:40624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.842 [2024-12-06 13:55:08.284095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:11.842 [2024-12-06 13:55:08.284119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:40640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.842 [2024-12-06 13:55:08.284146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:11.842 [2024-12-06 13:55:08.284168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:40264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.842 [2024-12-06 13:55:08.284190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:11.842 [2024-12-06 13:55:08.284212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.842 [2024-12-06 13:55:08.284227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:11.843 [2024-12-06 13:55:08.284247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 
lba:40376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.843 [2024-12-06 13:55:08.284261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:11.843 [2024-12-06 13:55:08.284296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:40408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.843 [2024-12-06 13:55:08.284309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:11.843 [2024-12-06 13:55:08.284344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:39896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.843 [2024-12-06 13:55:08.284357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:11.843 [2024-12-06 13:55:08.284376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:39960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.843 [2024-12-06 13:55:08.284390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:11.843 [2024-12-06 13:55:08.284408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:40448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.843 [2024-12-06 13:55:08.284422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.843 [2024-12-06 13:55:08.284440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:40032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.843 [2024-12-06 13:55:08.284454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:11.843 [2024-12-06 13:55:08.284472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.843 [2024-12-06 13:55:08.284512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:11.843 [2024-12-06 13:55:08.284547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:40520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.843 [2024-12-06 13:55:08.284603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:11.843 [2024-12-06 13:55:08.284633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:39904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.843 [2024-12-06 13:55:08.284648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:11.843 [2024-12-06 13:55:08.284669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:39968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.843 [2024-12-06 13:55:08.284685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:11.843 [2024-12-06 13:55:08.284714] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:39648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.843 [2024-12-06 13:55:08.284729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:11.843 [2024-12-06 13:55:08.284759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.843 [2024-12-06 13:55:08.284776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:11.843 [2024-12-06 13:55:08.286058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:40304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.843 [2024-12-06 13:55:08.286085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:11.843 [2024-12-06 13:55:08.286116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:40336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.843 [2024-12-06 13:55:08.286133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:11.843 [2024-12-06 13:55:08.286152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.843 [2024-12-06 13:55:08.286180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:11.843 [2024-12-06 13:55:08.286210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:40672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.843 [2024-12-06 13:55:08.286264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:11.843 [2024-12-06 13:55:08.286317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:40352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.843 [2024-12-06 13:55:08.286332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:11.843 [2024-12-06 13:55:08.286369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:40696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.843 [2024-12-06 13:55:08.286385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:11.843 [2024-12-06 13:55:08.286405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:40384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.843 [2024-12-06 13:55:08.286420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:11.843 [2024-12-06 13:55:08.286442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:40416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.843 [2024-12-06 13:55:08.286457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 
00:16:11.843 [2024-12-06 13:55:08.286478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:40440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.843 [2024-12-06 13:55:08.286493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:11.843 [2024-12-06 13:55:08.286514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:40704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.843 [2024-12-06 13:55:08.286529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:11.843 [2024-12-06 13:55:08.286560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:40720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.843 [2024-12-06 13:55:08.286575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:11.843 [2024-12-06 13:55:08.286609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:40464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.843 [2024-12-06 13:55:08.286631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:11.843 [2024-12-06 13:55:08.286662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:40496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.843 [2024-12-06 13:55:08.286682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:11.843 [2024-12-06 13:55:08.286710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:40528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.843 [2024-12-06 13:55:08.286731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:11.843 [2024-12-06 13:55:08.286764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.843 [2024-12-06 13:55:08.286785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:11.843 [2024-12-06 13:55:08.286812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:40072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.843 [2024-12-06 13:55:08.286833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:11.843 [2024-12-06 13:55:08.286861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:40136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.843 [2024-12-06 13:55:08.286882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:11.843 [2024-12-06 13:55:08.287403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:40152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.843 [2024-12-06 13:55:08.287438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:11.843 [2024-12-06 13:55:08.287472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:39792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.843 [2024-12-06 13:55:08.287495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:11.843 [2024-12-06 13:55:08.287523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.843 [2024-12-06 13:55:08.287545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:11.843 [2024-12-06 13:55:08.287572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.843 [2024-12-06 13:55:08.287593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:11.843 [2024-12-06 13:55:08.287620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:39832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.843 [2024-12-06 13:55:08.287641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:11.843 [2024-12-06 13:55:08.287669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:39784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.843 [2024-12-06 13:55:08.287690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:11.843 [2024-12-06 13:55:08.287717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:40344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.843 [2024-12-06 13:55:08.287769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:11.843 [2024-12-06 13:55:08.287800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:40064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.843 [2024-12-06 13:55:08.287822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.843 [2024-12-06 13:55:08.287850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:40544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.843 [2024-12-06 13:55:08.287870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:11.843 [2024-12-06 13:55:08.287897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:40128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.843 [2024-12-06 13:55:08.287918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:11.844 [2024-12-06 13:55:08.287945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.844 [2024-12-06 13:55:08.287966] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:11.844 [2024-12-06 13:55:08.287993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:40576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.844 [2024-12-06 13:55:08.288014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:11.844 [2024-12-06 13:55:08.288041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:40608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.844 [2024-12-06 13:55:08.288062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:11.844 [2024-12-06 13:55:08.288090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:40640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.844 [2024-12-06 13:55:08.288127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:11.844 [2024-12-06 13:55:08.288156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:40296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.844 [2024-12-06 13:55:08.288177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:11.844 [2024-12-06 13:55:08.288204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:40408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.844 [2024-12-06 13:55:08.288225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:11.844 [2024-12-06 13:55:08.288259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:39960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.844 [2024-12-06 13:55:08.288280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:11.844 [2024-12-06 13:55:08.288308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:40032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.844 [2024-12-06 13:55:08.288328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:11.844 [2024-12-06 13:55:08.288356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:40520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.844 [2024-12-06 13:55:08.288389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:11.844 [2024-12-06 13:55:08.288437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:39968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.844 [2024-12-06 13:55:08.288463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:11.844 [2024-12-06 13:55:08.288492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:39664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:11.844 [2024-12-06 13:55:08.288513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:11.844 [2024-12-06 13:55:08.288541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:40744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.844 [2024-12-06 13:55:08.288562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:11.844 [2024-12-06 13:55:08.288590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.844 [2024-12-06 13:55:08.288610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:11.844 [2024-12-06 13:55:08.288638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:40088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.844 [2024-12-06 13:55:08.288659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:11.844 [2024-12-06 13:55:08.288686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:40168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.844 [2024-12-06 13:55:08.288707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:11.844 [2024-12-06 13:55:08.288735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.844 [2024-12-06 13:55:08.288755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:11.844 [2024-12-06 13:55:08.288783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:40760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.844 [2024-12-06 13:55:08.288804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:11.844 [2024-12-06 13:55:08.288831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:40776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.844 [2024-12-06 13:55:08.288851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:11.844 [2024-12-06 13:55:08.288879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:40792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.844 [2024-12-06 13:55:08.288899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:11.844 [2024-12-06 13:55:08.288927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:40256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.844 [2024-12-06 13:55:08.288957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:11.844 [2024-12-06 13:55:08.288985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 
lba:40336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.844 [2024-12-06 13:55:08.289012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:11.844 [2024-12-06 13:55:08.289065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:40672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.844 [2024-12-06 13:55:08.289083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:11.844 [2024-12-06 13:55:08.289125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.844 [2024-12-06 13:55:08.289144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:11.844 [2024-12-06 13:55:08.289165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:40416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.844 [2024-12-06 13:55:08.289181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:11.844 [2024-12-06 13:55:08.289202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:40704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.844 [2024-12-06 13:55:08.289218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:11.844 [2024-12-06 13:55:08.289239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:40464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.844 [2024-12-06 13:55:08.289254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:11.844 [2024-12-06 13:55:08.289275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:40528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.844 [2024-12-06 13:55:08.289291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:11.844 [2024-12-06 13:55:08.289312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:40072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.844 [2024-12-06 13:55:08.289328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:11.844 [2024-12-06 13:55:08.290783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:40328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.844 [2024-12-06 13:55:08.290809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:11.844 [2024-12-06 13:55:08.290834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:40568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.844 [2024-12-06 13:55:08.290849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.844 [2024-12-06 13:55:08.290869] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:39792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.844 [2024-12-06 13:55:08.290883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:11.844 [2024-12-06 13:55:08.290902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:39816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.844 [2024-12-06 13:55:08.290916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:11.844 [2024-12-06 13:55:08.290935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:39784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.844 [2024-12-06 13:55:08.290948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:11.844 [2024-12-06 13:55:08.290980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:40064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.844 [2024-12-06 13:55:08.290995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:11.844 [2024-12-06 13:55:08.291014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:40128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.844 [2024-12-06 13:55:08.291028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:11.844 [2024-12-06 13:55:08.291046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.844 [2024-12-06 13:55:08.291060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:11.844 [2024-12-06 13:55:08.291078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:40640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.844 [2024-12-06 13:55:08.291092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:11.844 [2024-12-06 13:55:08.291110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:40408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.844 [2024-12-06 13:55:08.291124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:11.844 [2024-12-06 13:55:08.291157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:40032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.844 [2024-12-06 13:55:08.291174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:11.845 [2024-12-06 13:55:08.291193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.845 [2024-12-06 13:55:08.291207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000c p:0 m:0 dnr:0 
00:16:11.845 [2024-12-06 13:55:08.291226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:40744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.845 [2024-12-06 13:55:08.291239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:11.845 [2024-12-06 13:55:08.291258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:40088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.845 [2024-12-06 13:55:08.291272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:11.845 [2024-12-06 13:55:08.291290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.845 [2024-12-06 13:55:08.291304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:11.845 [2024-12-06 13:55:08.291322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:40776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.845 [2024-12-06 13:55:08.291336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:11.845 [2024-12-06 13:55:08.291354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:40256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.845 [2024-12-06 13:55:08.291368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:11.845 [2024-12-06 13:55:08.291398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.845 [2024-12-06 13:55:08.291431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:11.845 [2024-12-06 13:55:08.291451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:40416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.845 [2024-12-06 13:55:08.291465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:11.845 [2024-12-06 13:55:08.291484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:40464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.845 [2024-12-06 13:55:08.291498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:11.845 [2024-12-06 13:55:08.291517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:40072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.845 [2024-12-06 13:55:08.291530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:11.845 [2024-12-06 13:55:08.292214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:40600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.845 [2024-12-06 13:55:08.292239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:11.845 [2024-12-06 13:55:08.292264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:40632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.845 [2024-12-06 13:55:08.292279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:11.845 [2024-12-06 13:55:08.292298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:40392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.845 [2024-12-06 13:55:08.292312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:11.845 [2024-12-06 13:55:08.292331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:40800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.845 [2024-12-06 13:55:08.292345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:11.845 [2024-12-06 13:55:08.292369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:40816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.845 [2024-12-06 13:55:08.292384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:11.845 [2024-12-06 13:55:08.292403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:40832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.845 [2024-12-06 13:55:08.292416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:11.845 [2024-12-06 13:55:08.292435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:40848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.845 [2024-12-06 13:55:08.292449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:11.845 [2024-12-06 13:55:08.292468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:40864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.845 [2024-12-06 13:55:08.292481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:11.845 [2024-12-06 13:55:08.292500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.845 [2024-12-06 13:55:08.292525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:11.845 [2024-12-06 13:55:08.292545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:40896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.845 [2024-12-06 13:55:08.292560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:11.845 [2024-12-06 13:55:08.294000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:40912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.845 [2024-12-06 13:55:08.294028] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:11.845 [2024-12-06 13:55:08.294054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:40472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.845 [2024-12-06 13:55:08.294072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:11.845 [2024-12-06 13:55:08.294093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:40536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.845 [2024-12-06 13:55:08.294107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.845 [2024-12-06 13:55:08.294128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:40568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.845 [2024-12-06 13:55:08.294143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:11.845 [2024-12-06 13:55:08.294186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:39816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.845 [2024-12-06 13:55:08.294201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:11.845 [2024-12-06 13:55:08.294252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:40064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.845 [2024-12-06 13:55:08.294268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:11.845 [2024-12-06 13:55:08.294288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.845 [2024-12-06 13:55:08.294304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:11.845 [2024-12-06 13:55:08.294324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:40408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.845 [2024-12-06 13:55:08.294338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:11.845 [2024-12-06 13:55:08.294358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:39968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.845 [2024-12-06 13:55:08.294389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:11.845 [2024-12-06 13:55:08.294410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.845 [2024-12-06 13:55:08.294425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:11.845 [2024-12-06 13:55:08.294447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:40776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:11.845 [2024-12-06 13:55:08.294462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:11.845 [2024-12-06 13:55:08.294496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:40672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.845 [2024-12-06 13:55:08.294512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:11.845 [2024-12-06 13:55:08.294533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:40464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.845 [2024-12-06 13:55:08.294548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:11.845 [2024-12-06 13:55:08.294569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:40008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.845 [2024-12-06 13:55:08.294584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:11.846 [2024-12-06 13:55:08.294605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:40936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.846 [2024-12-06 13:55:08.294620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:11.846 [2024-12-06 13:55:08.294641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:40664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.846 [2024-12-06 13:55:08.294656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:11.846 [2024-12-06 13:55:08.294677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:40688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.846 [2024-12-06 13:55:08.294692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:11.846 [2024-12-06 13:55:08.294713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:40728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.846 [2024-12-06 13:55:08.294736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:11.846 [2024-12-06 13:55:08.294786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:40944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.846 [2024-12-06 13:55:08.294800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:11.846 [2024-12-06 13:55:08.294820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.846 [2024-12-06 13:55:08.294834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:11.846 [2024-12-06 13:55:08.294854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 
lba:40976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.846 [2024-12-06 13:55:08.294883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:11.846 [2024-12-06 13:55:08.294902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:40992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.846 [2024-12-06 13:55:08.294945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:11.846 [2024-12-06 13:55:08.294963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:41008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.846 [2024-12-06 13:55:08.294976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:11.846 [2024-12-06 13:55:08.295002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:40192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.846 [2024-12-06 13:55:08.295016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:11.846 [2024-12-06 13:55:08.295034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:40312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.846 [2024-12-06 13:55:08.295047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:11.846 [2024-12-06 13:55:08.295065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:40632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.846 [2024-12-06 13:55:08.295078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:11.846 [2024-12-06 13:55:08.295096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:40800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.846 [2024-12-06 13:55:08.295125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:11.846 [2024-12-06 13:55:08.295144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:40832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.846 [2024-12-06 13:55:08.295158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:11.846 [2024-12-06 13:55:08.295176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:40864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.846 [2024-12-06 13:55:08.295190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:11.846 [2024-12-06 13:55:08.295221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:40896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.846 [2024-12-06 13:55:08.295237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:11.846 [2024-12-06 13:55:08.296437] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:40592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.846 [2024-12-06 13:55:08.296463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:11.846 [2024-12-06 13:55:08.296515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:40376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.846 [2024-12-06 13:55:08.296532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:11.846 [2024-12-06 13:55:08.296552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:41024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.846 [2024-12-06 13:55:08.296565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:11.846 [2024-12-06 13:55:08.296583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:41040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.846 [2024-12-06 13:55:08.296614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:11.846 [2024-12-06 13:55:08.296632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:41056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.846 [2024-12-06 13:55:08.296646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.846 [2024-12-06 13:55:08.296664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:41072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.846 [2024-12-06 13:55:08.296690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:11.846 [2024-12-06 13:55:08.296711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:41088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:11.846 [2024-12-06 13:55:08.296725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:11.846 [2024-12-06 13:55:08.296744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:40488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.846 [2024-12-06 13:55:08.296758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:11.846 [2024-12-06 13:55:08.296777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:40736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.846 [2024-12-06 13:55:08.296790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:11.846 9550.44 IOPS, 37.31 MiB/s [2024-12-06T13:55:11.250Z] 9575.94 IOPS, 37.41 MiB/s [2024-12-06T13:55:11.250Z] 9596.65 IOPS, 37.49 MiB/s [2024-12-06T13:55:11.250Z] Received shutdown signal, test time was about 34.218896 seconds 00:16:11.846 00:16:11.846 Latency(us) 00:16:11.846 [2024-12-06T13:55:11.250Z] Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:16:11.846 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:11.846 Verification LBA range: start 0x0 length 0x4000 00:16:11.846 Nvme0n1 : 34.22 9597.91 37.49 0.00 0.00 13308.11 210.39 4026531.84 00:16:11.846 [2024-12-06T13:55:11.250Z] =================================================================================================================== 00:16:11.846 [2024-12-06T13:55:11.250Z] Total : 9597.91 37.49 0.00 0.00 13308.11 210.39 4026531.84 00:16:12.105 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:12.105 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:16:12.105 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:12.105 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:16:12.105 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:12.105 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:16:12.365 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:12.365 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:16:12.365 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:12.365 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:12.365 rmmod nvme_tcp 00:16:12.365 rmmod nvme_fabrics 00:16:12.365 rmmod nvme_keyring 00:16:12.365 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:12.365 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:16:12.365 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:16:12.365 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 76331 ']' 00:16:12.365 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 76331 00:16:12.365 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76331 ']' 00:16:12.365 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76331 00:16:12.365 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:16:12.365 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:12.365 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76331 00:16:12.365 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:12.365 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:12.365 killing process with pid 76331 00:16:12.365 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76331' 00:16:12.365 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@973 -- # kill 76331 00:16:12.365 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76331 00:16:12.625 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:12.625 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:12.625 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:12.625 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:16:12.625 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:16:12.625 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:12.625 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:16:12.625 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:12.625 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:12.625 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:12.625 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:12.625 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:12.625 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:12.625 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:12.625 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:12.625 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:12.625 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:12.625 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:12.625 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:12.625 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:12.625 13:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:12.625 13:55:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:12.625 13:55:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:12.625 13:55:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.625 13:55:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:12.625 13:55:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.885 13:55:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:16:12.885 00:16:12.885 real 0m40.209s 
00:16:12.885 user 2m9.929s 00:16:12.885 sys 0m12.236s 00:16:12.885 13:55:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:12.885 ************************************ 00:16:12.885 13:55:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:12.885 END TEST nvmf_host_multipath_status 00:16:12.885 ************************************ 00:16:12.885 13:55:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:12.885 13:55:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:12.885 13:55:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:12.885 13:55:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.885 ************************************ 00:16:12.885 START TEST nvmf_discovery_remove_ifc 00:16:12.885 ************************************ 00:16:12.885 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:12.885 * Looking for test storage... 00:16:12.885 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:12.885 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:12.885 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:12.885 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:16:12.885 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:12.885 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:12.885 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:12.885 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:12.885 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:16:12.885 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:16:12.885 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:16:12.885 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:16:12.885 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:16:12.885 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:16:12.885 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:16:12.885 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:12.885 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:16:12.885 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:16:12.885 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:12.885 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:12.885 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:16:12.885 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:16:12.885 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:12.885 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:16:12.885 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:12.885 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:16:12.885 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:16:12.885 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:12.885 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:16:12.885 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:12.885 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:12.885 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:12.885 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:16:12.885 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:12.886 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:12.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.886 --rc genhtml_branch_coverage=1 00:16:12.886 --rc genhtml_function_coverage=1 00:16:12.886 --rc genhtml_legend=1 00:16:12.886 --rc geninfo_all_blocks=1 00:16:12.886 --rc geninfo_unexecuted_blocks=1 00:16:12.886 00:16:12.886 ' 00:16:12.886 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:12.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.886 --rc genhtml_branch_coverage=1 00:16:12.886 --rc genhtml_function_coverage=1 00:16:12.886 --rc genhtml_legend=1 00:16:12.886 --rc geninfo_all_blocks=1 00:16:12.886 --rc geninfo_unexecuted_blocks=1 00:16:12.886 00:16:12.886 ' 00:16:12.886 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:12.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.886 --rc genhtml_branch_coverage=1 00:16:12.886 --rc genhtml_function_coverage=1 00:16:12.886 --rc genhtml_legend=1 00:16:12.886 --rc geninfo_all_blocks=1 00:16:12.886 --rc geninfo_unexecuted_blocks=1 00:16:12.886 00:16:12.886 ' 00:16:12.886 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:12.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.886 --rc genhtml_branch_coverage=1 00:16:12.886 --rc genhtml_function_coverage=1 00:16:12.886 --rc genhtml_legend=1 00:16:12.886 --rc geninfo_all_blocks=1 00:16:12.886 --rc geninfo_unexecuted_blocks=1 00:16:12.886 00:16:12.886 ' 00:16:12.886 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:12.886 13:55:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:16:12.886 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:12.886 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:12.886 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:12.886 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:12.886 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:12.886 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:12.886 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:12.886 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:12.886 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:12.886 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:13.146 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:16:13.146 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=cfa2def7-c8af-457f-82a0-b312efdea7f4 00:16:13.146 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:13.146 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:13.146 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:13.146 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:13.146 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:13.146 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:13.146 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:13.146 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:13.147 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:13.147 13:55:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:13.147 Cannot find device "nvmf_init_br" 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:13.147 Cannot find device "nvmf_init_br2" 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:16:13.147 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:13.148 Cannot find device "nvmf_tgt_br" 00:16:13.148 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:16:13.148 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:13.148 Cannot find device "nvmf_tgt_br2" 00:16:13.148 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:16:13.148 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:13.148 Cannot find device "nvmf_init_br" 00:16:13.148 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:16:13.148 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:13.148 Cannot find device "nvmf_init_br2" 00:16:13.148 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:16:13.148 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:13.148 Cannot find device "nvmf_tgt_br" 00:16:13.148 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:16:13.148 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:13.148 Cannot find device "nvmf_tgt_br2" 00:16:13.148 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:16:13.148 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:13.148 Cannot find device "nvmf_br" 00:16:13.148 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:16:13.148 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:13.148 Cannot find device "nvmf_init_if" 00:16:13.148 13:55:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:16:13.148 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:13.148 Cannot find device "nvmf_init_if2" 00:16:13.148 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:16:13.148 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:13.148 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:13.148 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:16:13.148 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:13.148 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:13.148 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:16:13.148 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:13.148 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:13.148 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:13.148 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:13.148 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:13.148 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:13.148 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:13.148 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:13.148 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:13.148 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:13.148 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:13.408 13:55:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:13.408 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:13.408 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:16:13.408 00:16:13.408 --- 10.0.0.3 ping statistics --- 00:16:13.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.408 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:13.408 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:13.408 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:16:13.408 00:16:13.408 --- 10.0.0.4 ping statistics --- 00:16:13.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.408 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:13.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:13.408 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:16:13.408 00:16:13.408 --- 10.0.0.1 ping statistics --- 00:16:13.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.408 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:13.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:13.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:16:13.408 00:16:13.408 --- 10.0.0.2 ping statistics --- 00:16:13.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.408 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=77227 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 77227 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77227 ']' 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:13.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:13.408 13:55:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:13.408 [2024-12-06 13:55:12.803946] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:16:13.408 [2024-12-06 13:55:12.804032] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:13.668 [2024-12-06 13:55:12.957368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.668 [2024-12-06 13:55:13.038712] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:13.668 [2024-12-06 13:55:13.038793] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:13.668 [2024-12-06 13:55:13.038807] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:13.668 [2024-12-06 13:55:13.038818] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:13.668 [2024-12-06 13:55:13.038828] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:13.668 [2024-12-06 13:55:13.039422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:13.927 [2024-12-06 13:55:13.117926] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:14.497 13:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:14.497 13:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:16:14.497 13:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:14.497 13:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:14.497 13:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:14.497 13:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:14.497 13:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:16:14.497 13:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.497 13:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:14.497 [2024-12-06 13:55:13.854987] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:14.497 [2024-12-06 13:55:13.863197] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:16:14.497 null0 00:16:14.497 [2024-12-06 13:55:13.895031] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:14.756 13:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.756 13:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77260 00:16:14.756 13:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:16:14.756 13:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77260 /tmp/host.sock 00:16:14.756 13:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77260 ']' 00:16:14.756 13:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:16:14.756 13:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:14.756 13:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:14.756 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:14.756 13:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:14.756 13:55:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:14.756 [2024-12-06 13:55:13.963247] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:16:14.756 [2024-12-06 13:55:13.963346] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77260 ] 00:16:14.756 [2024-12-06 13:55:14.116748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.015 [2024-12-06 13:55:14.160369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.015 13:55:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:15.015 13:55:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:16:15.015 13:55:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:15.015 13:55:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:16:15.015 13:55:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.015 13:55:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:15.015 13:55:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.015 13:55:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:16:15.015 13:55:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.015 13:55:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:15.015 [2024-12-06 13:55:14.306806] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:15.015 13:55:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.015 13:55:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:16:15.015 13:55:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.015 13:55:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:16.389 [2024-12-06 13:55:15.358517] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:16.389 [2024-12-06 13:55:15.358547] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:16.389 [2024-12-06 13:55:15.358572] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:16.389 [2024-12-06 13:55:15.364567] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:16:16.389 [2024-12-06 13:55:15.418999] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:16:16.389 [2024-12-06 13:55:15.420094] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x12770f0:1 started. 00:16:16.389 [2024-12-06 13:55:15.421970] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:16.389 [2024-12-06 13:55:15.422034] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:16.389 [2024-12-06 13:55:15.422066] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:16.389 [2024-12-06 13:55:15.422083] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:16.389 [2024-12-06 13:55:15.422121] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:16.389 13:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.389 13:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:16:16.389 13:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:16.389 13:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:16.389 13:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.389 13:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:16.389 13:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:16.389 [2024-12-06 13:55:15.426858] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x12770f0 was disconnected and freed. delete nvme_qpair. 
00:16:16.389 13:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:16.389 13:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:16.389 13:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.389 13:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:16:16.389 13:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:16:16.389 13:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:16:16.389 13:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:16:16.389 13:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:16.389 13:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:16.389 13:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:16.389 13:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.389 13:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:16.389 13:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:16.389 13:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:16.389 13:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.389 13:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:16.389 13:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:17.326 13:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:17.326 13:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:17.326 13:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.326 13:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:17.326 13:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:17.326 13:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:17.326 13:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:17.326 13:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.326 13:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:17.326 13:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:18.276 13:55:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:18.276 13:55:17 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:18.276 13:55:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:18.276 13:55:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.276 13:55:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:18.276 13:55:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:18.276 13:55:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:18.276 13:55:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.588 13:55:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:18.588 13:55:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:19.524 13:55:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:19.524 13:55:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:19.524 13:55:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:19.524 13:55:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.524 13:55:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:19.524 13:55:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:19.525 13:55:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:19.525 13:55:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.525 13:55:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:19.525 13:55:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:20.457 13:55:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:20.457 13:55:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:20.457 13:55:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.457 13:55:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:20.457 13:55:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:20.457 13:55:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:20.457 13:55:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:20.457 13:55:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.457 13:55:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:20.457 13:55:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:21.394 13:55:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:21.654 13:55:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:21.654 13:55:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:21.654 13:55:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.654 13:55:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:21.654 13:55:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:21.654 13:55:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:21.654 13:55:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.654 [2024-12-06 13:55:20.849382] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:16:21.654 [2024-12-06 13:55:20.849475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:21.654 [2024-12-06 13:55:20.849490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:21.654 [2024-12-06 13:55:20.849503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:21.654 [2024-12-06 13:55:20.849512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:21.654 [2024-12-06 13:55:20.849521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:21.654 [2024-12-06 13:55:20.849530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:21.654 [2024-12-06 13:55:20.849550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:21.654 [2024-12-06 13:55:20.849558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:21.654 [2024-12-06 13:55:20.849583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:21.654 [2024-12-06 13:55:20.849592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:21.654 [2024-12-06 13:55:20.849617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126fe20 is same with the state(6) to be set 00:16:21.654 13:55:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:21.654 13:55:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:21.654 [2024-12-06 13:55:20.859401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x126fe20 (9): Bad file descriptor 00:16:21.654 [2024-12-06 13:55:20.869436] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
00:16:21.655 [2024-12-06 13:55:20.869481] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:16:21.655 [2024-12-06 13:55:20.869507] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:16:21.655 [2024-12-06 13:55:20.869526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:16:21.655 [2024-12-06 13:55:20.869597] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:16:22.590 13:55:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:22.590 13:55:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:22.590 13:55:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.590 13:55:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:22.590 13:55:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:22.590 13:55:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:22.590 13:55:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:22.590 [2024-12-06 13:55:21.924203] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:16:22.590 [2024-12-06 13:55:21.924284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x126fe20 with addr=10.0.0.3, port=4420 00:16:22.590 [2024-12-06 13:55:21.924306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126fe20 is same with the state(6) to be set 00:16:22.590 [2024-12-06 13:55:21.924349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x126fe20 (9): Bad file descriptor 00:16:22.590 [2024-12-06 13:55:21.924975] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:16:22.590 [2024-12-06 13:55:21.925078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:16:22.590 [2024-12-06 13:55:21.925123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:16:22.590 [2024-12-06 13:55:21.925145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:16:22.590 [2024-12-06 13:55:21.925163] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:16:22.590 [2024-12-06 13:55:21.925175] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:16:22.590 [2024-12-06 13:55:21.925184] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:16:22.590 [2024-12-06 13:55:21.925202] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:16:22.590 [2024-12-06 13:55:21.925212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:16:22.590 13:55:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.590 13:55:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:22.590 13:55:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:23.529 [2024-12-06 13:55:22.925262] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:16:23.529 [2024-12-06 13:55:22.925297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:16:23.529 [2024-12-06 13:55:22.925321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:16:23.529 [2024-12-06 13:55:22.925330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:16:23.529 [2024-12-06 13:55:22.925339] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:16:23.529 [2024-12-06 13:55:22.925347] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:16:23.529 [2024-12-06 13:55:22.925353] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:16:23.529 [2024-12-06 13:55:22.925358] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:16:23.529 [2024-12-06 13:55:22.925388] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:16:23.529 [2024-12-06 13:55:22.925428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:23.529 [2024-12-06 13:55:22.925442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.529 [2024-12-06 13:55:22.925453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:23.529 [2024-12-06 13:55:22.925460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.529 [2024-12-06 13:55:22.925469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:23.529 [2024-12-06 13:55:22.925476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.529 [2024-12-06 13:55:22.925485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:23.529 [2024-12-06 13:55:22.925492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.529 [2024-12-06 13:55:22.925500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:23.529 [2024-12-06 13:55:22.925507] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.529 [2024-12-06 13:55:22.925516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:16:23.529 [2024-12-06 13:55:22.925650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11fba20 (9): Bad file descriptor 00:16:23.529 [2024-12-06 13:55:22.926662] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:16:23.529 [2024-12-06 13:55:22.926688] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:16:23.787 13:55:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:23.787 13:55:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:23.787 13:55:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:23.787 13:55:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.787 13:55:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:23.787 13:55:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:23.787 13:55:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:23.787 13:55:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.787 13:55:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:16:23.787 13:55:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:23.787 13:55:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:23.787 13:55:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:16:23.787 13:55:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:23.787 13:55:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:23.787 13:55:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:23.787 13:55:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.787 13:55:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:23.787 13:55:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:23.787 13:55:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:23.787 13:55:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.787 13:55:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:23.787 13:55:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:24.724 13:55:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:24.724 13:55:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:24.724 13:55:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:24.724 13:55:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.724 13:55:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:24.724 13:55:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:24.724 13:55:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:24.724 13:55:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.982 13:55:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:24.982 13:55:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:25.550 [2024-12-06 13:55:24.932706] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:25.551 [2024-12-06 13:55:24.932735] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:25.551 [2024-12-06 13:55:24.932768] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:25.551 [2024-12-06 13:55:24.938741] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:16:25.811 [2024-12-06 13:55:24.993014] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:16:25.811 [2024-12-06 13:55:24.993808] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x128f060:1 started. 00:16:25.811 [2024-12-06 13:55:24.995206] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:25.811 [2024-12-06 13:55:24.995262] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:25.811 [2024-12-06 13:55:24.995284] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:25.811 [2024-12-06 13:55:24.995299] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:16:25.812 [2024-12-06 13:55:24.995308] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:25.812 [2024-12-06 13:55:25.001176] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x128f060 was disconnected and freed. delete nvme_qpair. 
00:16:25.812 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:25.812 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:25.812 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.812 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:25.812 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:25.812 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:25.812 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:25.812 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.812 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:16:25.812 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:16:25.812 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77260 00:16:25.812 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77260 ']' 00:16:25.812 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77260 00:16:26.071 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:16:26.071 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:26.071 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77260 00:16:26.071 killing process with pid 77260 00:16:26.071 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:26.071 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:26.071 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77260' 00:16:26.071 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77260 00:16:26.071 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77260 00:16:26.071 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:16:26.071 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:26.071 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:16:26.331 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:26.331 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:16:26.331 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:26.331 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:26.331 rmmod nvme_tcp 00:16:26.331 rmmod nvme_fabrics 00:16:26.331 rmmod nvme_keyring 00:16:26.331 13:55:25 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:26.331 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:16:26.331 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:16:26.331 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 77227 ']' 00:16:26.331 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 77227 00:16:26.331 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77227 ']' 00:16:26.331 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77227 00:16:26.331 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:16:26.331 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:26.331 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77227 00:16:26.331 killing process with pid 77227 00:16:26.331 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:26.331 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:26.331 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77227' 00:16:26.331 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77227 00:16:26.331 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77227 00:16:26.590 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:26.590 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:26.590 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:26.590 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:16:26.590 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:16:26.590 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:26.590 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:16:26.590 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:26.590 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:26.590 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:26.590 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:26.590 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:26.590 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:26.590 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:26.590 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:26.590 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:26.590 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:26.590 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:26.590 13:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:26.848 13:55:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:26.848 13:55:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:26.848 13:55:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:26.848 13:55:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:26.848 13:55:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.848 13:55:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:26.848 13:55:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.848 13:55:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:16:26.848 00:16:26.848 real 0m13.990s 00:16:26.848 user 0m23.260s 00:16:26.848 sys 0m2.628s 00:16:26.848 ************************************ 00:16:26.848 END TEST nvmf_discovery_remove_ifc 00:16:26.848 ************************************ 00:16:26.848 13:55:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:26.848 13:55:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:26.849 13:55:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:26.849 13:55:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:26.849 13:55:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:26.849 13:55:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.849 ************************************ 00:16:26.849 START TEST nvmf_identify_kernel_target 00:16:26.849 ************************************ 00:16:26.849 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:26.849 * Looking for test storage... 
00:16:26.849 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:26.849 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:26.849 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:16:26.849 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:27.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.108 --rc genhtml_branch_coverage=1 00:16:27.108 --rc genhtml_function_coverage=1 00:16:27.108 --rc genhtml_legend=1 00:16:27.108 --rc geninfo_all_blocks=1 00:16:27.108 --rc geninfo_unexecuted_blocks=1 00:16:27.108 00:16:27.108 ' 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:27.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.108 --rc genhtml_branch_coverage=1 00:16:27.108 --rc genhtml_function_coverage=1 00:16:27.108 --rc genhtml_legend=1 00:16:27.108 --rc geninfo_all_blocks=1 00:16:27.108 --rc geninfo_unexecuted_blocks=1 00:16:27.108 00:16:27.108 ' 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:27.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.108 --rc genhtml_branch_coverage=1 00:16:27.108 --rc genhtml_function_coverage=1 00:16:27.108 --rc genhtml_legend=1 00:16:27.108 --rc geninfo_all_blocks=1 00:16:27.108 --rc geninfo_unexecuted_blocks=1 00:16:27.108 00:16:27.108 ' 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:27.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.108 --rc genhtml_branch_coverage=1 00:16:27.108 --rc genhtml_function_coverage=1 00:16:27.108 --rc genhtml_legend=1 00:16:27.108 --rc geninfo_all_blocks=1 00:16:27.108 --rc geninfo_unexecuted_blocks=1 00:16:27.108 00:16:27.108 ' 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cfa2def7-c8af-457f-82a0-b312efdea7f4 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:27.108 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:27.108 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:16:27.109 13:55:26 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:27.109 13:55:26 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:27.109 Cannot find device "nvmf_init_br" 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:27.109 Cannot find device "nvmf_init_br2" 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:27.109 Cannot find device "nvmf_tgt_br" 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:27.109 Cannot find device "nvmf_tgt_br2" 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:27.109 Cannot find device "nvmf_init_br" 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:27.109 Cannot find device "nvmf_init_br2" 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:27.109 Cannot find device "nvmf_tgt_br" 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:27.109 Cannot find device "nvmf_tgt_br2" 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:27.109 Cannot find device "nvmf_br" 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:27.109 Cannot find device "nvmf_init_if" 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:27.109 Cannot find device "nvmf_init_if2" 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:27.109 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:27.109 13:55:26 
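The "Cannot find device" messages here and just below are expected: nvmftestinit first tears down any interfaces left over from a previous run and tolerates their absence (each ip command is followed by a true fallback in the trace). A rough sketch of that idempotent pre-cleanup, using the interface names set earlier:

  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster || true   # detach from any old bridge
      ip link set "$dev" down     || true
  done
  ip link delete nvmf_br type bridge || true
  ip link delete nvmf_init_if  || true
  ip link delete nvmf_init_if2 || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true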
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:16:27.109 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:27.109 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:27.368 13:55:26 
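Condensed from the nvmf_veth_init trace above: the test network is four veth pairs carrying the 10.0.0.0/24 addresses, with the target-side ends moved into the nvmf_tgt_ns_spdk namespace and the peer ends later enslaved to the nvmf_br bridge. A minimal sketch showing one initiator pair and one target pair only:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up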
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:27.368 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:27.368 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:16:27.368 00:16:27.368 --- 10.0.0.3 ping statistics --- 00:16:27.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.368 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:27.368 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:27.368 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.114 ms 00:16:27.368 00:16:27.368 --- 10.0.0.4 ping statistics --- 00:16:27.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.368 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:27.368 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:27.368 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:16:27.368 00:16:27.368 --- 10.0.0.1 ping statistics --- 00:16:27.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.368 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:27.368 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
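The ipts calls above expand into iptables rules tagged with an "SPDK_NVMF:" comment, which is what lets the teardown phase strip them wholesale later. Reconstructed from the expanded commands in the trace, the wrapper is essentially:

  ipts() {
      # Append a comment recording the original arguments so the rule can be
      # found (and removed) via the SPDK_NVMF tag later.
      iptables "$@" -m comment --comment "SPDK_NVMF:$*"
  }

  ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four pings that follow confirm connectivity in both directions: root namespace to the target addresses, and the target namespace back to the initiator addresses.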
00:16:27.368 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:16:27.368 00:16:27.368 --- 10.0.0.2 ping statistics --- 00:16:27.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.368 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
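get_main_ns_ip, traced above, resolves which address the kernel target should listen on by mapping the transport to the name of a variable and then dereferencing it. A condensed sketch, assuming TEST_TRANSPORT holds "tcp" as in this run:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          [rdma]=NVMF_FIRST_TARGET_IP
          [tcp]=NVMF_INITIATOR_IP
      )
      ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable to read
      echo "${!ip}"                          # indirect expansion; 10.0.0.1 here
  }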
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:16:27.368 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:16:27.626 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:27.626 13:55:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:27.884 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:27.884 Waiting for block devices as requested 00:16:27.884 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:28.144 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:28.144 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:28.144 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:28.144 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:16:28.144 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:16:28.144 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:28.144 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:28.144 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:16:28.144 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:16:28.144 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:28.144 No valid GPT data, bailing 00:16:28.144 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:28.144 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:28.144 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:28.144 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:16:28.144 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:28.144 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:28.144 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:16:28.144 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:16:28.144 13:55:27 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:28.144 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:28.144 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:16:28.144 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:16:28.144 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:28.144 No valid GPT data, bailing 00:16:28.144 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:28.403 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:28.403 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:28.403 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:16:28.403 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:28.403 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:28.403 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:16:28.403 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:16:28.403 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:28.403 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:28.403 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:16:28.403 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:16:28.403 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:28.403 No valid GPT data, bailing 00:16:28.403 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:28.403 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:28.403 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:28.403 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:16:28.403 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:28.403 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:28.403 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:16:28.403 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:16:28.403 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:28.403 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
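The loop traced above walks /sys/block/nvme*, skips zoned namespaces, and treats "No valid GPT data, bailing" as the good case: a device with no partition table is free for the kernel target to export. A simplified sketch of that selection, using blkid alone in place of the spdk-gpt.py helper:

  nvme=""
  for block in /sys/block/nvme*; do
      dev=${block##*/}
      # Skip zoned namespaces; the test only wants a regular block device.
      [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue
      # A device without a partition table is considered unused.
      if [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]]; then
          nvme=/dev/$dev
      fi
  done
  echo "device to export: $nvme"   # /dev/nvme1n1 in this run, the last candidate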
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:28.403 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:16:28.403 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:16:28.403 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:28.403 No valid GPT data, bailing 00:16:28.403 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:28.403 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:28.403 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:28.403 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:16:28.403 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:16:28.403 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:28.403 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:28.403 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:28.403 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:16:28.403 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:16:28.403 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:16:28.403 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:16:28.403 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:16:28.403 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:16:28.403 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:16:28.403 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:16:28.403 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:28.403 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid=cfa2def7-c8af-457f-82a0-b312efdea7f4 -a 10.0.0.1 -t tcp -s 4420 00:16:28.403 00:16:28.404 Discovery Log Number of Records 2, Generation counter 2 00:16:28.404 =====Discovery Log Entry 0====== 00:16:28.404 trtype: tcp 00:16:28.404 adrfam: ipv4 00:16:28.404 subtype: current discovery subsystem 00:16:28.404 treq: not specified, sq flow control disable supported 00:16:28.404 portid: 1 00:16:28.404 trsvcid: 4420 00:16:28.404 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:28.404 traddr: 10.0.0.1 00:16:28.404 eflags: none 00:16:28.404 sectype: none 00:16:28.404 =====Discovery Log Entry 1====== 00:16:28.404 trtype: tcp 00:16:28.404 adrfam: ipv4 00:16:28.404 subtype: nvme subsystem 00:16:28.404 treq: not 
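The configfs sequence traced above is the standard way to expose a block device through the Linux kernel NVMe-oF target: create a subsystem, back its single namespace with /dev/nvme1n1, create a TCP port on 10.0.0.1:4420, and link the subsystem to the port. The xtrace does not show redirection targets, so the usual nvmet attribute file names are assumed below:

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$nvmet/ports/1"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
  echo 1            > "$subsys/attr_allow_any_host"
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
  echo tcp          > "$nvmet/ports/1/addr_trtype"
  echo 4420         > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4         > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"

The nvme discover output that follows confirms the port is live: the discovery subsystem plus nqn.2016-06.io.spdk:testnqn, both reachable at 10.0.0.1:4420.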
specified, sq flow control disable supported 00:16:28.404 portid: 1 00:16:28.404 trsvcid: 4420 00:16:28.404 subnqn: nqn.2016-06.io.spdk:testnqn 00:16:28.404 traddr: 10.0.0.1 00:16:28.404 eflags: none 00:16:28.404 sectype: none 00:16:28.404 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:16:28.404 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:16:28.661 ===================================================== 00:16:28.661 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:28.661 ===================================================== 00:16:28.661 Controller Capabilities/Features 00:16:28.661 ================================ 00:16:28.661 Vendor ID: 0000 00:16:28.661 Subsystem Vendor ID: 0000 00:16:28.661 Serial Number: c7f322e28313092fe7b6 00:16:28.661 Model Number: Linux 00:16:28.661 Firmware Version: 6.8.9-20 00:16:28.661 Recommended Arb Burst: 0 00:16:28.661 IEEE OUI Identifier: 00 00 00 00:16:28.661 Multi-path I/O 00:16:28.661 May have multiple subsystem ports: No 00:16:28.661 May have multiple controllers: No 00:16:28.661 Associated with SR-IOV VF: No 00:16:28.661 Max Data Transfer Size: Unlimited 00:16:28.661 Max Number of Namespaces: 0 00:16:28.661 Max Number of I/O Queues: 1024 00:16:28.661 NVMe Specification Version (VS): 1.3 00:16:28.661 NVMe Specification Version (Identify): 1.3 00:16:28.661 Maximum Queue Entries: 1024 00:16:28.661 Contiguous Queues Required: No 00:16:28.661 Arbitration Mechanisms Supported 00:16:28.661 Weighted Round Robin: Not Supported 00:16:28.661 Vendor Specific: Not Supported 00:16:28.661 Reset Timeout: 7500 ms 00:16:28.661 Doorbell Stride: 4 bytes 00:16:28.662 NVM Subsystem Reset: Not Supported 00:16:28.662 Command Sets Supported 00:16:28.662 NVM Command Set: Supported 00:16:28.662 Boot Partition: Not Supported 00:16:28.662 Memory Page Size Minimum: 4096 bytes 00:16:28.662 Memory Page Size Maximum: 4096 bytes 00:16:28.662 Persistent Memory Region: Not Supported 00:16:28.662 Optional Asynchronous Events Supported 00:16:28.662 Namespace Attribute Notices: Not Supported 00:16:28.662 Firmware Activation Notices: Not Supported 00:16:28.662 ANA Change Notices: Not Supported 00:16:28.662 PLE Aggregate Log Change Notices: Not Supported 00:16:28.662 LBA Status Info Alert Notices: Not Supported 00:16:28.662 EGE Aggregate Log Change Notices: Not Supported 00:16:28.662 Normal NVM Subsystem Shutdown event: Not Supported 00:16:28.662 Zone Descriptor Change Notices: Not Supported 00:16:28.662 Discovery Log Change Notices: Supported 00:16:28.662 Controller Attributes 00:16:28.662 128-bit Host Identifier: Not Supported 00:16:28.662 Non-Operational Permissive Mode: Not Supported 00:16:28.662 NVM Sets: Not Supported 00:16:28.662 Read Recovery Levels: Not Supported 00:16:28.662 Endurance Groups: Not Supported 00:16:28.662 Predictable Latency Mode: Not Supported 00:16:28.662 Traffic Based Keep ALive: Not Supported 00:16:28.662 Namespace Granularity: Not Supported 00:16:28.662 SQ Associations: Not Supported 00:16:28.662 UUID List: Not Supported 00:16:28.662 Multi-Domain Subsystem: Not Supported 00:16:28.662 Fixed Capacity Management: Not Supported 00:16:28.662 Variable Capacity Management: Not Supported 00:16:28.662 Delete Endurance Group: Not Supported 00:16:28.662 Delete NVM Set: Not Supported 00:16:28.662 Extended LBA Formats Supported: Not Supported 00:16:28.662 Flexible Data 
Placement Supported: Not Supported 00:16:28.662 00:16:28.662 Controller Memory Buffer Support 00:16:28.662 ================================ 00:16:28.662 Supported: No 00:16:28.662 00:16:28.662 Persistent Memory Region Support 00:16:28.662 ================================ 00:16:28.662 Supported: No 00:16:28.662 00:16:28.662 Admin Command Set Attributes 00:16:28.662 ============================ 00:16:28.662 Security Send/Receive: Not Supported 00:16:28.662 Format NVM: Not Supported 00:16:28.662 Firmware Activate/Download: Not Supported 00:16:28.662 Namespace Management: Not Supported 00:16:28.662 Device Self-Test: Not Supported 00:16:28.662 Directives: Not Supported 00:16:28.662 NVMe-MI: Not Supported 00:16:28.662 Virtualization Management: Not Supported 00:16:28.662 Doorbell Buffer Config: Not Supported 00:16:28.662 Get LBA Status Capability: Not Supported 00:16:28.662 Command & Feature Lockdown Capability: Not Supported 00:16:28.662 Abort Command Limit: 1 00:16:28.662 Async Event Request Limit: 1 00:16:28.662 Number of Firmware Slots: N/A 00:16:28.662 Firmware Slot 1 Read-Only: N/A 00:16:28.662 Firmware Activation Without Reset: N/A 00:16:28.662 Multiple Update Detection Support: N/A 00:16:28.662 Firmware Update Granularity: No Information Provided 00:16:28.662 Per-Namespace SMART Log: No 00:16:28.662 Asymmetric Namespace Access Log Page: Not Supported 00:16:28.662 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:16:28.662 Command Effects Log Page: Not Supported 00:16:28.662 Get Log Page Extended Data: Supported 00:16:28.662 Telemetry Log Pages: Not Supported 00:16:28.662 Persistent Event Log Pages: Not Supported 00:16:28.662 Supported Log Pages Log Page: May Support 00:16:28.662 Commands Supported & Effects Log Page: Not Supported 00:16:28.662 Feature Identifiers & Effects Log Page:May Support 00:16:28.662 NVMe-MI Commands & Effects Log Page: May Support 00:16:28.662 Data Area 4 for Telemetry Log: Not Supported 00:16:28.662 Error Log Page Entries Supported: 1 00:16:28.662 Keep Alive: Not Supported 00:16:28.662 00:16:28.662 NVM Command Set Attributes 00:16:28.662 ========================== 00:16:28.662 Submission Queue Entry Size 00:16:28.662 Max: 1 00:16:28.662 Min: 1 00:16:28.662 Completion Queue Entry Size 00:16:28.662 Max: 1 00:16:28.662 Min: 1 00:16:28.662 Number of Namespaces: 0 00:16:28.662 Compare Command: Not Supported 00:16:28.662 Write Uncorrectable Command: Not Supported 00:16:28.662 Dataset Management Command: Not Supported 00:16:28.662 Write Zeroes Command: Not Supported 00:16:28.662 Set Features Save Field: Not Supported 00:16:28.662 Reservations: Not Supported 00:16:28.662 Timestamp: Not Supported 00:16:28.662 Copy: Not Supported 00:16:28.662 Volatile Write Cache: Not Present 00:16:28.662 Atomic Write Unit (Normal): 1 00:16:28.662 Atomic Write Unit (PFail): 1 00:16:28.662 Atomic Compare & Write Unit: 1 00:16:28.662 Fused Compare & Write: Not Supported 00:16:28.662 Scatter-Gather List 00:16:28.662 SGL Command Set: Supported 00:16:28.662 SGL Keyed: Not Supported 00:16:28.662 SGL Bit Bucket Descriptor: Not Supported 00:16:28.662 SGL Metadata Pointer: Not Supported 00:16:28.662 Oversized SGL: Not Supported 00:16:28.662 SGL Metadata Address: Not Supported 00:16:28.662 SGL Offset: Supported 00:16:28.662 Transport SGL Data Block: Not Supported 00:16:28.662 Replay Protected Memory Block: Not Supported 00:16:28.662 00:16:28.662 Firmware Slot Information 00:16:28.662 ========================= 00:16:28.662 Active slot: 0 00:16:28.662 00:16:28.662 00:16:28.662 Error Log 
00:16:28.662 ========= 00:16:28.662 00:16:28.662 Active Namespaces 00:16:28.662 ================= 00:16:28.662 Discovery Log Page 00:16:28.662 ================== 00:16:28.662 Generation Counter: 2 00:16:28.662 Number of Records: 2 00:16:28.662 Record Format: 0 00:16:28.662 00:16:28.662 Discovery Log Entry 0 00:16:28.662 ---------------------- 00:16:28.662 Transport Type: 3 (TCP) 00:16:28.662 Address Family: 1 (IPv4) 00:16:28.662 Subsystem Type: 3 (Current Discovery Subsystem) 00:16:28.662 Entry Flags: 00:16:28.662 Duplicate Returned Information: 0 00:16:28.662 Explicit Persistent Connection Support for Discovery: 0 00:16:28.662 Transport Requirements: 00:16:28.662 Secure Channel: Not Specified 00:16:28.662 Port ID: 1 (0x0001) 00:16:28.662 Controller ID: 65535 (0xffff) 00:16:28.662 Admin Max SQ Size: 32 00:16:28.662 Transport Service Identifier: 4420 00:16:28.662 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:28.662 Transport Address: 10.0.0.1 00:16:28.662 Discovery Log Entry 1 00:16:28.662 ---------------------- 00:16:28.662 Transport Type: 3 (TCP) 00:16:28.662 Address Family: 1 (IPv4) 00:16:28.662 Subsystem Type: 2 (NVM Subsystem) 00:16:28.662 Entry Flags: 00:16:28.662 Duplicate Returned Information: 0 00:16:28.662 Explicit Persistent Connection Support for Discovery: 0 00:16:28.662 Transport Requirements: 00:16:28.662 Secure Channel: Not Specified 00:16:28.662 Port ID: 1 (0x0001) 00:16:28.662 Controller ID: 65535 (0xffff) 00:16:28.662 Admin Max SQ Size: 32 00:16:28.662 Transport Service Identifier: 4420 00:16:28.662 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:16:28.662 Transport Address: 10.0.0.1 00:16:28.662 13:55:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:16:28.921 get_feature(0x01) failed 00:16:28.921 get_feature(0x02) failed 00:16:28.921 get_feature(0x04) failed 00:16:28.921 ===================================================== 00:16:28.921 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:16:28.921 ===================================================== 00:16:28.921 Controller Capabilities/Features 00:16:28.921 ================================ 00:16:28.921 Vendor ID: 0000 00:16:28.921 Subsystem Vendor ID: 0000 00:16:28.921 Serial Number: 8a7b5a22f75150dee184 00:16:28.921 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:16:28.921 Firmware Version: 6.8.9-20 00:16:28.921 Recommended Arb Burst: 6 00:16:28.921 IEEE OUI Identifier: 00 00 00 00:16:28.921 Multi-path I/O 00:16:28.921 May have multiple subsystem ports: Yes 00:16:28.921 May have multiple controllers: Yes 00:16:28.921 Associated with SR-IOV VF: No 00:16:28.921 Max Data Transfer Size: Unlimited 00:16:28.921 Max Number of Namespaces: 1024 00:16:28.921 Max Number of I/O Queues: 128 00:16:28.921 NVMe Specification Version (VS): 1.3 00:16:28.921 NVMe Specification Version (Identify): 1.3 00:16:28.921 Maximum Queue Entries: 1024 00:16:28.921 Contiguous Queues Required: No 00:16:28.921 Arbitration Mechanisms Supported 00:16:28.921 Weighted Round Robin: Not Supported 00:16:28.921 Vendor Specific: Not Supported 00:16:28.921 Reset Timeout: 7500 ms 00:16:28.921 Doorbell Stride: 4 bytes 00:16:28.921 NVM Subsystem Reset: Not Supported 00:16:28.921 Command Sets Supported 00:16:28.921 NVM Command Set: Supported 00:16:28.921 Boot Partition: Not Supported 00:16:28.921 Memory 
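The second identify run, whose output begins above, targets the data subsystem with the same transport ID syntax used for discovery. As a point of comparison only (not something this test does), the kernel initiator could attach the same namespace with nvme-cli, reusing the host NQN generated earlier:

  nvme connect -t tcp -a 10.0.0.1 -s 4420 \
      -n nqn.2016-06.io.spdk:testnqn \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4

The get_feature(0x01/0x02/0x04) failures at the top of this identify output are non-fatal here; the identify tool simply reports features the target does not support and moves on.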
Page Size Minimum: 4096 bytes 00:16:28.921 Memory Page Size Maximum: 4096 bytes 00:16:28.921 Persistent Memory Region: Not Supported 00:16:28.921 Optional Asynchronous Events Supported 00:16:28.921 Namespace Attribute Notices: Supported 00:16:28.921 Firmware Activation Notices: Not Supported 00:16:28.921 ANA Change Notices: Supported 00:16:28.921 PLE Aggregate Log Change Notices: Not Supported 00:16:28.921 LBA Status Info Alert Notices: Not Supported 00:16:28.921 EGE Aggregate Log Change Notices: Not Supported 00:16:28.921 Normal NVM Subsystem Shutdown event: Not Supported 00:16:28.921 Zone Descriptor Change Notices: Not Supported 00:16:28.921 Discovery Log Change Notices: Not Supported 00:16:28.921 Controller Attributes 00:16:28.921 128-bit Host Identifier: Supported 00:16:28.921 Non-Operational Permissive Mode: Not Supported 00:16:28.921 NVM Sets: Not Supported 00:16:28.921 Read Recovery Levels: Not Supported 00:16:28.921 Endurance Groups: Not Supported 00:16:28.921 Predictable Latency Mode: Not Supported 00:16:28.921 Traffic Based Keep ALive: Supported 00:16:28.921 Namespace Granularity: Not Supported 00:16:28.921 SQ Associations: Not Supported 00:16:28.921 UUID List: Not Supported 00:16:28.921 Multi-Domain Subsystem: Not Supported 00:16:28.921 Fixed Capacity Management: Not Supported 00:16:28.921 Variable Capacity Management: Not Supported 00:16:28.921 Delete Endurance Group: Not Supported 00:16:28.921 Delete NVM Set: Not Supported 00:16:28.921 Extended LBA Formats Supported: Not Supported 00:16:28.921 Flexible Data Placement Supported: Not Supported 00:16:28.921 00:16:28.921 Controller Memory Buffer Support 00:16:28.921 ================================ 00:16:28.921 Supported: No 00:16:28.921 00:16:28.921 Persistent Memory Region Support 00:16:28.921 ================================ 00:16:28.921 Supported: No 00:16:28.921 00:16:28.921 Admin Command Set Attributes 00:16:28.921 ============================ 00:16:28.921 Security Send/Receive: Not Supported 00:16:28.921 Format NVM: Not Supported 00:16:28.921 Firmware Activate/Download: Not Supported 00:16:28.921 Namespace Management: Not Supported 00:16:28.921 Device Self-Test: Not Supported 00:16:28.921 Directives: Not Supported 00:16:28.921 NVMe-MI: Not Supported 00:16:28.921 Virtualization Management: Not Supported 00:16:28.921 Doorbell Buffer Config: Not Supported 00:16:28.921 Get LBA Status Capability: Not Supported 00:16:28.921 Command & Feature Lockdown Capability: Not Supported 00:16:28.921 Abort Command Limit: 4 00:16:28.921 Async Event Request Limit: 4 00:16:28.921 Number of Firmware Slots: N/A 00:16:28.921 Firmware Slot 1 Read-Only: N/A 00:16:28.921 Firmware Activation Without Reset: N/A 00:16:28.921 Multiple Update Detection Support: N/A 00:16:28.921 Firmware Update Granularity: No Information Provided 00:16:28.921 Per-Namespace SMART Log: Yes 00:16:28.921 Asymmetric Namespace Access Log Page: Supported 00:16:28.921 ANA Transition Time : 10 sec 00:16:28.921 00:16:28.921 Asymmetric Namespace Access Capabilities 00:16:28.921 ANA Optimized State : Supported 00:16:28.921 ANA Non-Optimized State : Supported 00:16:28.921 ANA Inaccessible State : Supported 00:16:28.921 ANA Persistent Loss State : Supported 00:16:28.921 ANA Change State : Supported 00:16:28.921 ANAGRPID is not changed : No 00:16:28.921 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:16:28.921 00:16:28.921 ANA Group Identifier Maximum : 128 00:16:28.921 Number of ANA Group Identifiers : 128 00:16:28.921 Max Number of Allowed Namespaces : 1024 00:16:28.921 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:16:28.921 Command Effects Log Page: Supported 00:16:28.921 Get Log Page Extended Data: Supported 00:16:28.921 Telemetry Log Pages: Not Supported 00:16:28.921 Persistent Event Log Pages: Not Supported 00:16:28.921 Supported Log Pages Log Page: May Support 00:16:28.921 Commands Supported & Effects Log Page: Not Supported 00:16:28.921 Feature Identifiers & Effects Log Page:May Support 00:16:28.921 NVMe-MI Commands & Effects Log Page: May Support 00:16:28.921 Data Area 4 for Telemetry Log: Not Supported 00:16:28.921 Error Log Page Entries Supported: 128 00:16:28.921 Keep Alive: Supported 00:16:28.921 Keep Alive Granularity: 1000 ms 00:16:28.921 00:16:28.921 NVM Command Set Attributes 00:16:28.921 ========================== 00:16:28.921 Submission Queue Entry Size 00:16:28.921 Max: 64 00:16:28.921 Min: 64 00:16:28.921 Completion Queue Entry Size 00:16:28.921 Max: 16 00:16:28.921 Min: 16 00:16:28.921 Number of Namespaces: 1024 00:16:28.921 Compare Command: Not Supported 00:16:28.921 Write Uncorrectable Command: Not Supported 00:16:28.921 Dataset Management Command: Supported 00:16:28.921 Write Zeroes Command: Supported 00:16:28.921 Set Features Save Field: Not Supported 00:16:28.921 Reservations: Not Supported 00:16:28.921 Timestamp: Not Supported 00:16:28.922 Copy: Not Supported 00:16:28.922 Volatile Write Cache: Present 00:16:28.922 Atomic Write Unit (Normal): 1 00:16:28.922 Atomic Write Unit (PFail): 1 00:16:28.922 Atomic Compare & Write Unit: 1 00:16:28.922 Fused Compare & Write: Not Supported 00:16:28.922 Scatter-Gather List 00:16:28.922 SGL Command Set: Supported 00:16:28.922 SGL Keyed: Not Supported 00:16:28.922 SGL Bit Bucket Descriptor: Not Supported 00:16:28.922 SGL Metadata Pointer: Not Supported 00:16:28.922 Oversized SGL: Not Supported 00:16:28.922 SGL Metadata Address: Not Supported 00:16:28.922 SGL Offset: Supported 00:16:28.922 Transport SGL Data Block: Not Supported 00:16:28.922 Replay Protected Memory Block: Not Supported 00:16:28.922 00:16:28.922 Firmware Slot Information 00:16:28.922 ========================= 00:16:28.922 Active slot: 0 00:16:28.922 00:16:28.922 Asymmetric Namespace Access 00:16:28.922 =========================== 00:16:28.922 Change Count : 0 00:16:28.922 Number of ANA Group Descriptors : 1 00:16:28.922 ANA Group Descriptor : 0 00:16:28.922 ANA Group ID : 1 00:16:28.922 Number of NSID Values : 1 00:16:28.922 Change Count : 0 00:16:28.922 ANA State : 1 00:16:28.922 Namespace Identifier : 1 00:16:28.922 00:16:28.922 Commands Supported and Effects 00:16:28.922 ============================== 00:16:28.922 Admin Commands 00:16:28.922 -------------- 00:16:28.922 Get Log Page (02h): Supported 00:16:28.922 Identify (06h): Supported 00:16:28.922 Abort (08h): Supported 00:16:28.922 Set Features (09h): Supported 00:16:28.922 Get Features (0Ah): Supported 00:16:28.922 Asynchronous Event Request (0Ch): Supported 00:16:28.922 Keep Alive (18h): Supported 00:16:28.922 I/O Commands 00:16:28.922 ------------ 00:16:28.922 Flush (00h): Supported 00:16:28.922 Write (01h): Supported LBA-Change 00:16:28.922 Read (02h): Supported 00:16:28.922 Write Zeroes (08h): Supported LBA-Change 00:16:28.922 Dataset Management (09h): Supported 00:16:28.922 00:16:28.922 Error Log 00:16:28.922 ========= 00:16:28.922 Entry: 0 00:16:28.922 Error Count: 0x3 00:16:28.922 Submission Queue Id: 0x0 00:16:28.922 Command Id: 0x5 00:16:28.922 Phase Bit: 0 00:16:28.922 Status Code: 0x2 00:16:28.922 Status Code Type: 0x0 00:16:28.922 Do Not Retry: 1 00:16:28.922 Error 
Location: 0x28 00:16:28.922 LBA: 0x0 00:16:28.922 Namespace: 0x0 00:16:28.922 Vendor Log Page: 0x0 00:16:28.922 ----------- 00:16:28.922 Entry: 1 00:16:28.922 Error Count: 0x2 00:16:28.922 Submission Queue Id: 0x0 00:16:28.922 Command Id: 0x5 00:16:28.922 Phase Bit: 0 00:16:28.922 Status Code: 0x2 00:16:28.922 Status Code Type: 0x0 00:16:28.922 Do Not Retry: 1 00:16:28.922 Error Location: 0x28 00:16:28.922 LBA: 0x0 00:16:28.922 Namespace: 0x0 00:16:28.922 Vendor Log Page: 0x0 00:16:28.922 ----------- 00:16:28.922 Entry: 2 00:16:28.922 Error Count: 0x1 00:16:28.922 Submission Queue Id: 0x0 00:16:28.922 Command Id: 0x4 00:16:28.922 Phase Bit: 0 00:16:28.922 Status Code: 0x2 00:16:28.922 Status Code Type: 0x0 00:16:28.922 Do Not Retry: 1 00:16:28.922 Error Location: 0x28 00:16:28.922 LBA: 0x0 00:16:28.922 Namespace: 0x0 00:16:28.922 Vendor Log Page: 0x0 00:16:28.922 00:16:28.922 Number of Queues 00:16:28.922 ================ 00:16:28.922 Number of I/O Submission Queues: 128 00:16:28.922 Number of I/O Completion Queues: 128 00:16:28.922 00:16:28.922 ZNS Specific Controller Data 00:16:28.922 ============================ 00:16:28.922 Zone Append Size Limit: 0 00:16:28.922 00:16:28.922 00:16:28.922 Active Namespaces 00:16:28.922 ================= 00:16:28.922 get_feature(0x05) failed 00:16:28.922 Namespace ID:1 00:16:28.922 Command Set Identifier: NVM (00h) 00:16:28.922 Deallocate: Supported 00:16:28.922 Deallocated/Unwritten Error: Not Supported 00:16:28.922 Deallocated Read Value: Unknown 00:16:28.922 Deallocate in Write Zeroes: Not Supported 00:16:28.922 Deallocated Guard Field: 0xFFFF 00:16:28.922 Flush: Supported 00:16:28.922 Reservation: Not Supported 00:16:28.922 Namespace Sharing Capabilities: Multiple Controllers 00:16:28.922 Size (in LBAs): 1310720 (5GiB) 00:16:28.922 Capacity (in LBAs): 1310720 (5GiB) 00:16:28.922 Utilization (in LBAs): 1310720 (5GiB) 00:16:28.922 UUID: 975b4505-d97b-4511-821a-98d9541d538a 00:16:28.922 Thin Provisioning: Not Supported 00:16:28.922 Per-NS Atomic Units: Yes 00:16:28.922 Atomic Boundary Size (Normal): 0 00:16:28.922 Atomic Boundary Size (PFail): 0 00:16:28.922 Atomic Boundary Offset: 0 00:16:28.922 NGUID/EUI64 Never Reused: No 00:16:28.922 ANA group ID: 1 00:16:28.922 Namespace Write Protected: No 00:16:28.922 Number of LBA Formats: 1 00:16:28.922 Current LBA Format: LBA Format #00 00:16:28.922 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:16:28.922 00:16:28.922 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:16:28.922 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:28.922 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:16:28.922 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:28.922 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:16:28.922 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:28.922 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:28.922 rmmod nvme_tcp 00:16:28.922 rmmod nvme_fabrics 00:16:28.922 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:28.922 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:16:28.922 13:55:28 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:16:28.922 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:16:28.922 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:28.922 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:28.922 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:28.922 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:16:28.922 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:28.922 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:16:28.922 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:16:28.922 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:28.922 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:28.922 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:28.922 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:29.181 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:29.181 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:29.181 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:29.181 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:29.181 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:29.181 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:29.181 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:29.181 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:29.181 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:29.181 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:29.181 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:29.181 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:29.181 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.181 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:29.181 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:29.181 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
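iptr, traced above as iptables-save piped through grep -v SPDK_NVMF into iptables-restore, is the counterpart of the ipts tagging done during setup: every rule carrying the SPDK_NVMF comment is dropped in one pass and the remaining ruleset is reloaded. Reconstructed from the trace:

  iptr() {
      # Remove all rules added by ipts (they carry the SPDK_NVMF comment)
      # and reinstall everything else unchanged.
      iptables-save | grep -v SPDK_NVMF | iptables-restore
  }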
# return 0 00:16:29.181 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:16:29.181 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:16:29.181 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:16:29.440 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:29.440 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:29.440 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:16:29.440 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:29.440 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:16:29.440 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:16:29.440 13:55:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:30.022 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:30.022 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:30.280 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:30.280 00:16:30.280 real 0m3.395s 00:16:30.280 user 0m1.196s 00:16:30.280 sys 0m1.510s 00:16:30.280 13:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:30.280 13:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.280 ************************************ 00:16:30.280 END TEST nvmf_identify_kernel_target 00:16:30.280 ************************************ 00:16:30.280 13:55:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:30.280 13:55:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:30.280 13:55:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:30.280 13:55:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.280 ************************************ 00:16:30.280 START TEST nvmf_auth_host 00:16:30.280 ************************************ 00:16:30.280 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:30.280 * Looking for test storage... 
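clean_kernel_target, traced above, tears the configfs tree down in the reverse order of setup: the namespace is disabled, the port-to-subsystem link is removed first, then the namespace, port, and subsystem directories, and finally the nvmet modules are unloaded. A sketch of the same sequence (the echo 0 target file is not shown in the trace and is assumed to be the namespace enable attribute):

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

  echo 0 > "$subsys/namespaces/1/enable"
  rm -f  "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
  rmdir  "$subsys/namespaces/1"
  rmdir  "$nvmet/ports/1"
  rmdir  "$subsys"
  modprobe -r nvmet_tcp nvmet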
00:16:30.280 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:30.280 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:30.280 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:16:30.280 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:30.537 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:30.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.538 --rc genhtml_branch_coverage=1 00:16:30.538 --rc genhtml_function_coverage=1 00:16:30.538 --rc genhtml_legend=1 00:16:30.538 --rc geninfo_all_blocks=1 00:16:30.538 --rc geninfo_unexecuted_blocks=1 00:16:30.538 00:16:30.538 ' 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:30.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.538 --rc genhtml_branch_coverage=1 00:16:30.538 --rc genhtml_function_coverage=1 00:16:30.538 --rc genhtml_legend=1 00:16:30.538 --rc geninfo_all_blocks=1 00:16:30.538 --rc geninfo_unexecuted_blocks=1 00:16:30.538 00:16:30.538 ' 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:30.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.538 --rc genhtml_branch_coverage=1 00:16:30.538 --rc genhtml_function_coverage=1 00:16:30.538 --rc genhtml_legend=1 00:16:30.538 --rc geninfo_all_blocks=1 00:16:30.538 --rc geninfo_unexecuted_blocks=1 00:16:30.538 00:16:30.538 ' 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:30.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.538 --rc genhtml_branch_coverage=1 00:16:30.538 --rc genhtml_function_coverage=1 00:16:30.538 --rc genhtml_legend=1 00:16:30.538 --rc geninfo_all_blocks=1 00:16:30.538 --rc geninfo_unexecuted_blocks=1 00:16:30.538 00:16:30.538 ' 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
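The lt 1.15 2 call traced above resolves to cmp_versions 1.15 '<' 2, a component-wise comparison of dotted version strings used here to pick the right lcov options. A condensed sketch of that helper (per-component numeric validation is omitted):

  cmp_versions() {    # usage: cmp_versions 1.15 '<' 2
      local IFS=.-: op=$2 v
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
          if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
              [[ $op == '>' ]]; return
          elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
              [[ $op == '<' ]]; return
          fi
      done
      [[ $op == '=' ]]
  }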
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cfa2def7-c8af-457f-82a0-b312efdea7f4 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:30.538 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:30.538 Cannot find device "nvmf_init_br" 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:30.538 Cannot find device "nvmf_init_br2" 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:30.538 Cannot find device "nvmf_tgt_br" 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:30.538 Cannot find device "nvmf_tgt_br2" 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:30.538 Cannot find device "nvmf_init_br" 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:30.538 Cannot find device "nvmf_init_br2" 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:30.538 Cannot find device "nvmf_tgt_br" 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:30.538 Cannot find device "nvmf_tgt_br2" 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:30.538 Cannot find device "nvmf_br" 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:30.538 Cannot find device "nvmf_init_if" 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:30.538 Cannot find device "nvmf_init_if2" 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:30.538 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:30.538 13:55:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:30.538 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:30.538 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:30.795 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:30.795 13:55:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:30.795 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:30.795 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:30.795 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:30.795 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:30.795 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:30.795 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:30.795 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:30.795 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:30.795 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:30.795 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:30.795 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:30.795 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:30.795 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:30.795 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:30.795 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:30.795 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:30.795 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:30.795 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:30.795 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
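For reference, the nvmf_veth_init sequence logged around this point builds a small virtual topology: two initiator-side veth pairs (nvmf_init_if/nvmf_init_br with 10.0.0.1 and nvmf_init_if2/nvmf_init_br2 with 10.0.0.2) stay in the default namespace, two target-side pairs are moved into the nvmf_tgt_ns_spdk namespace with 10.0.0.3 and 10.0.0.4, and all bridge-facing ends are enslaved to the nvmf_br bridge. Condensed into a sketch (one initiator/target pair shown; the second pair follows the same pattern, and this is a recap of the commands in the log rather than the script itself):

ip netns add nvmf_tgt_ns_spdk                                  # target runs in its own net namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair (default ns)
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up; ip link set nvmf_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link set nvmf_init_br master nvmf_br                        # bridge the initiator and target halves
ip link set nvmf_tgt_br master nvmf_br

With that in place the host side reaches the target at 10.0.0.3/10.0.0.4 and the namespace reaches the host at 10.0.0.1/10.0.0.2, which the four pings just below confirm.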
00:16:30.795 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:30.795 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:30.795 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:30.795 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:30.795 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:30.795 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:30.795 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:30.795 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:30.795 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:30.795 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:16:30.795 00:16:30.795 --- 10.0.0.3 ping statistics --- 00:16:30.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.795 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:16:30.795 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:30.795 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:30.795 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:16:30.795 00:16:30.795 --- 10.0.0.4 ping statistics --- 00:16:30.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.795 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:16:30.795 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:30.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:30.795 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:16:30.795 00:16:30.795 --- 10.0.0.1 ping statistics --- 00:16:30.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.795 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:16:30.795 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:31.052 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:31.052 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:16:31.052 00:16:31.052 --- 10.0.0.2 ping statistics --- 00:16:31.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.053 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:16:31.053 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:31.053 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:16:31.053 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:31.053 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:31.053 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:31.053 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:31.053 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:31.053 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:31.053 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:31.053 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:16:31.053 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:31.053 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:31.053 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.053 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=78246 00:16:31.053 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:16:31.053 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 78246 00:16:31.053 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78246 ']' 00:16:31.053 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.053 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:31.053 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
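Before the target comes up, the script also opens the NVMe/TCP port in the firewall, verifies connectivity (the four pings above), loads the kernel initiator, and then nvmfappstart launches nvmf_tgt inside the target namespace with DH-HMAC-CHAP tracing enabled. A rough sketch of those steps based on what the log shows (the iptables comment tag, abbreviated here, simply repeats the rule text so later cleanup can find it, and the way nvmfappstart backgrounds the process is paraphrased):

# accept NVMe/TCP (port 4420) from both initiator interfaces, plus bridge-local forwarding
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:...'

modprobe nvme-tcp                           # kernel NVMe/TCP initiator for the host side of the test

# start the SPDK target inside the namespace; -L nvme_auth enables the nvme_auth debug log
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!                                  # 78246 in this run
# waitforlisten then polls (up to max_retries=100) until /var/tmp/spdk.sock accepts RPCs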
00:16:31.053 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:31.053 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.310 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:31.310 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:16:31.310 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:31.310 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:31.310 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.310 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:31.310 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:16:31.310 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:16:31.310 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:31.310 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:31.310 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:31.310 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:16:31.310 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:16:31.310 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:31.310 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=07e042cc27a2fc4616096c45701ba9f2 00:16:31.310 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:31.310 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.fLE 00:16:31.310 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 07e042cc27a2fc4616096c45701ba9f2 0 00:16:31.310 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 07e042cc27a2fc4616096c45701ba9f2 0 00:16:31.310 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:31.310 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:31.310 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=07e042cc27a2fc4616096c45701ba9f2 00:16:31.310 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:16:31.310 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:31.568 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.fLE 00:16:31.568 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.fLE 00:16:31.568 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.fLE 00:16:31.568 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:16:31.568 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:31.568 13:55:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:31.568 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:31.568 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:16:31.568 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:16:31.568 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:31.568 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f68fbd52078da5116edb219ce6bc75908a7cf4cbc128059eca7daf538c0a2e44 00:16:31.568 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:31.568 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.bqS 00:16:31.568 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f68fbd52078da5116edb219ce6bc75908a7cf4cbc128059eca7daf538c0a2e44 3 00:16:31.568 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f68fbd52078da5116edb219ce6bc75908a7cf4cbc128059eca7daf538c0a2e44 3 00:16:31.568 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:31.568 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:31.568 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f68fbd52078da5116edb219ce6bc75908a7cf4cbc128059eca7daf538c0a2e44 00:16:31.568 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:16:31.568 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:31.568 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.bqS 00:16:31.568 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.bqS 00:16:31.568 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.bqS 00:16:31.568 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:16:31.568 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:31.568 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:31.568 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:31.568 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:16:31.568 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:16:31.568 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:31.568 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e1528f9c5272e538eed8d02db309657299fe0c82a34ae0ea 00:16:31.568 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:31.568 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.rqs 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e1528f9c5272e538eed8d02db309657299fe0c82a34ae0ea 0 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e1528f9c5272e538eed8d02db309657299fe0c82a34ae0ea 0 
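Each gen_dhchap_key call above follows the same recipe: pull the requested number of random hex characters from /dev/urandom, wrap them in a DHHC-1 secret, and store the result in a mode-0600 temp file. A rough sketch of one call, assuming the standard DH-HMAC-CHAP secret framing (base64 of the ASCII key with a CRC32 appended); the python3 heredoc is an illustrative stand-in for the inline 'python -' step in the log, and the CRC byte order is an assumption, not taken from the script:

digest=null len=32                                 # e.g. the "gen_dhchap_key null 32" call for keys[0]
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)     # 32 hex characters of randomness
file=$(mktemp -t "spdk.key-$digest.XXX")           # e.g. /tmp/spdk.key-null.fLE above
python3 - "$key" > "$file" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                         # the hex string itself is the secret payload
crc = zlib.crc32(key).to_bytes(4, "little")        # appended checksum; byte order assumed
print("DHHC-1:00:" + base64.b64encode(key + crc).decode() + ":")   # "00" = null digest id (01/02/03 for sha256/384/512)
PY
chmod 0600 "$file"

The files produced this way (/tmp/spdk.key-null.fLE, /tmp/spdk.key-sha512.bqS, and so on) are what the keyring_file_add_key RPCs register further down, and the same payloads reappear later as the DHHC-1:00:.../DHHC-1:02:... strings handed to nvmet_auth_set_key.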
00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e1528f9c5272e538eed8d02db309657299fe0c82a34ae0ea 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.rqs 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.rqs 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.rqs 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7c022af687cf6a72e43fc73d2872dd6413fd70a8715929a2 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.FDY 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7c022af687cf6a72e43fc73d2872dd6413fd70a8715929a2 2 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7c022af687cf6a72e43fc73d2872dd6413fd70a8715929a2 2 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7c022af687cf6a72e43fc73d2872dd6413fd70a8715929a2 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.FDY 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.FDY 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.FDY 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:31.569 13:55:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=bb9160f239784d76ef4a5e5ed7e5ac4d 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.F2F 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key bb9160f239784d76ef4a5e5ed7e5ac4d 1 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 bb9160f239784d76ef4a5e5ed7e5ac4d 1 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=bb9160f239784d76ef4a5e5ed7e5ac4d 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:16:31.569 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:31.827 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.F2F 00:16:31.827 13:55:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.F2F 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.F2F 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e432ee195584c7020cc46cf1d29ace55 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.09O 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e432ee195584c7020cc46cf1d29ace55 1 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e432ee195584c7020cc46cf1d29ace55 1 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=e432ee195584c7020cc46cf1d29ace55 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.09O 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.09O 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.09O 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8ad438bdf79316215418a518ac2ecc35fa44e534aa1c229e 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Ec7 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8ad438bdf79316215418a518ac2ecc35fa44e534aa1c229e 2 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8ad438bdf79316215418a518ac2ecc35fa44e534aa1c229e 2 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8ad438bdf79316215418a518ac2ecc35fa44e534aa1c229e 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Ec7 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Ec7 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Ec7 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:16:31.827 13:55:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a6b31a3b69d5c3d81250c4b020bf6a7c 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.ae2 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a6b31a3b69d5c3d81250c4b020bf6a7c 0 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a6b31a3b69d5c3d81250c4b020bf6a7c 0 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a6b31a3b69d5c3d81250c4b020bf6a7c 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.ae2 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.ae2 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.ae2 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:16:31.827 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:31.828 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ddf9fc64a590a315513d0f2b6ed8dcbe1edbf36122e366981d437da8c10a1a93 00:16:31.828 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:31.828 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.haX 00:16:31.828 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ddf9fc64a590a315513d0f2b6ed8dcbe1edbf36122e366981d437da8c10a1a93 3 00:16:31.828 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ddf9fc64a590a315513d0f2b6ed8dcbe1edbf36122e366981d437da8c10a1a93 3 00:16:31.828 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:31.828 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:31.828 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ddf9fc64a590a315513d0f2b6ed8dcbe1edbf36122e366981d437da8c10a1a93 00:16:31.828 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:16:31.828 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:16:32.085 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.haX 00:16:32.085 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.haX 00:16:32.085 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.haX 00:16:32.085 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:16:32.085 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78246 00:16:32.085 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78246 ']' 00:16:32.085 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.085 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:32.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.085 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.085 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:32.085 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.354 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:32.354 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:16:32.354 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:32.354 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.fLE 00:16:32.354 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.354 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.354 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.354 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.bqS ]] 00:16:32.354 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bqS 00:16:32.354 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.354 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.354 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.354 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:32.354 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.rqs 00:16:32.354 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.354 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.354 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.354 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.FDY ]] 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.FDY 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.F2F 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.09O ]] 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.09O 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Ec7 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.ae2 ]] 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.ae2 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.haX 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:32.355 13:55:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:32.355 13:55:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:32.627 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:32.884 Waiting for block devices as requested 00:16:32.884 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:32.884 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:33.451 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:33.451 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:33.451 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:16:33.451 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:16:33.451 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:33.451 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:33.451 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:16:33.451 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:16:33.451 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:33.451 No valid GPT data, bailing 00:16:33.451 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:33.451 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:16:33.451 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:16:33.451 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:16:33.451 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:33.451 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:33.451 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:16:33.451 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:16:33.451 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:33.451 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:33.451 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:16:33.451 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:16:33.451 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:33.711 No valid GPT data, bailing 00:16:33.711 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:33.711 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:16:33.711 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:16:33.711 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:16:33.711 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:33.711 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:33.711 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:16:33.711 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:16:33.711 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:33.711 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:33.711 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:16:33.711 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:16:33.711 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:33.711 No valid GPT data, bailing 00:16:33.711 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:33.711 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:16:33.711 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:16:33.711 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:16:33.711 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:33.711 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:33.711 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:16:33.711 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:16:33.711 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:33.711 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:33.711 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:16:33.711 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:16:33.711 13:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:33.711 No valid GPT data, bailing 00:16:33.711 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:33.711 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:16:33.711 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:16:33.711 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:16:33.711 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:16:33.711 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:33.711 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:33.711 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:33.711 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:16:33.711 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:16:33.711 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:16:33.711 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:16:33.711 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:16:33.711 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:16:33.711 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:16:33.711 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:16:33.711 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:33.711 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid=cfa2def7-c8af-457f-82a0-b312efdea7f4 -a 10.0.0.1 -t tcp -s 4420 00:16:33.711 00:16:33.711 Discovery Log Number of Records 2, Generation counter 2 00:16:33.711 =====Discovery Log Entry 0====== 00:16:33.711 trtype: tcp 00:16:33.711 adrfam: ipv4 00:16:33.711 subtype: current discovery subsystem 00:16:33.711 treq: not specified, sq flow control disable supported 00:16:33.711 portid: 1 00:16:33.711 trsvcid: 4420 00:16:33.711 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:33.711 traddr: 10.0.0.1 00:16:33.711 eflags: none 00:16:33.711 sectype: none 00:16:33.711 =====Discovery Log Entry 1====== 00:16:33.711 trtype: tcp 00:16:33.711 adrfam: ipv4 00:16:33.711 subtype: nvme subsystem 00:16:33.711 treq: not specified, sq flow control disable supported 00:16:33.711 portid: 1 00:16:33.711 trsvcid: 4420 00:16:33.711 subnqn: nqn.2024-02.io.spdk:cnode0 00:16:33.711 traddr: 10.0.0.1 00:16:33.711 eflags: none 00:16:33.711 sectype: none 00:16:33.711 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:33.711 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:16:33.711 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:16:33.711 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:33.711 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:33.711 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:33.711 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:33.711 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:33.712 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTE1MjhmOWM1MjcyZTUzOGVlZDhkMDJkYjMwOTY1NzI5OWZlMGM4MmEzNGFlMGVhGwOoaQ==: 00:16:33.712 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: 00:16:33.712 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:33.712 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:33.971 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTE1MjhmOWM1MjcyZTUzOGVlZDhkMDJkYjMwOTY1NzI5OWZlMGM4MmEzNGFlMGVhGwOoaQ==: 00:16:33.972 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: ]] 00:16:33.972 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: 00:16:33.972 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:16:33.972 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:16:33.972 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:16:33.972 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:33.972 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:16:33.972 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:33.972 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:16:33.972 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:33.972 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:33.972 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:33.972 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:33.972 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.972 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.972 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.972 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:33.972 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:33.972 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:33.972 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:33.972 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:33.972 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:33.972 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:33.972 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:33.972 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:33.972 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:16:33.972 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:33.972 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.972 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.972 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.972 nvme0n1 00:16:33.972 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.972 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:33.972 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.972 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.972 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:33.972 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdlMDQyY2MyN2EyZmM0NjE2MDk2YzQ1NzAxYmE5ZjI2cdk4: 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdlMDQyY2MyN2EyZmM0NjE2MDk2YzQ1NzAxYmE5ZjI2cdk4: 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: ]] 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.232 nvme0n1 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.232 
13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTE1MjhmOWM1MjcyZTUzOGVlZDhkMDJkYjMwOTY1NzI5OWZlMGM4MmEzNGFlMGVhGwOoaQ==: 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTE1MjhmOWM1MjcyZTUzOGVlZDhkMDJkYjMwOTY1NzI5OWZlMGM4MmEzNGFlMGVhGwOoaQ==: 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: ]] 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:34.232 13:55:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:34.232 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:34.233 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:34.233 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:34.233 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:34.233 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:34.233 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:34.233 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:34.233 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:34.233 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.233 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.233 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.492 nvme0n1 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmI5MTYwZjIzOTc4NGQ3NmVmNGE1ZTVlZDdlNWFjNGQiiCma: 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: 00:16:34.492 13:55:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmI5MTYwZjIzOTc4NGQ3NmVmNGE1ZTVlZDdlNWFjNGQiiCma: 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: ]] 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.492 nvme0n1 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.492 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.752 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.752 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:34.752 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.752 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.752 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.752 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:34.752 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:16:34.752 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:34.752 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:34.752 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:34.752 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:34.752 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGFkNDM4YmRmNzkzMTYyMTU0MThhNTE4YWMyZWNjMzVmYTQ0ZTUzNGFhMWMyMjllsh5kQg==: 00:16:34.752 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: 00:16:34.752 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:34.752 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:34.752 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGFkNDM4YmRmNzkzMTYyMTU0MThhNTE4YWMyZWNjMzVmYTQ0ZTUzNGFhMWMyMjllsh5kQg==: 00:16:34.752 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: ]] 00:16:34.752 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: 00:16:34.752 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:16:34.752 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:34.752 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:34.752 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:34.752 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:34.752 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:34.752 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:34.752 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.752 13:55:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.752 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.752 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:34.752 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:34.752 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:34.752 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:34.752 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:34.752 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:34.752 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:34.752 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:34.752 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:34.752 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:34.752 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:34.752 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:34.752 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.752 13:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.752 nvme0n1 00:16:34.752 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.752 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:34.752 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.752 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.752 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:34.752 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.752 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.752 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:34.752 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.752 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.752 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.752 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:34.752 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:16:34.752 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:34.752 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:34.752 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:34.753 
13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:34.753 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGRmOWZjNjRhNTkwYTMxNTUxM2QwZjJiNmVkOGRjYmUxZWRiZjM2MTIyZTM2Njk4MWQ0MzdkYThjMTBhMWE5M8Ml828=: 00:16:34.753 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:34.753 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:34.753 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:34.753 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGRmOWZjNjRhNTkwYTMxNTUxM2QwZjJiNmVkOGRjYmUxZWRiZjM2MTIyZTM2Njk4MWQ0MzdkYThjMTBhMWE5M8Ml828=: 00:16:34.753 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:34.753 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:16:34.753 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:34.753 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:34.753 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:34.753 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:34.753 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:34.753 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:34.753 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.753 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.753 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.753 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:34.753 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:34.753 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:34.753 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:34.753 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:34.753 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:34.753 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:34.753 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:34.753 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:34.753 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:34.753 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:34.753 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:34.753 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.753 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
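The xtrace above is the nvmf_auth_host test exercising NVMe/TCP DH-HMAC-CHAP end to end: nvmf/common.sh builds a kernel nvmet target through configfs, and host/auth.sh then loops over digests, DH groups and key IDs, provisioning each key pair on the target and connecting with the SPDK initiator through rpc_cmd. The sketch below is a standalone approximation of a single iteration (sha256 / ffdhe2048 / key 1), not the test script itself. The configfs attribute names (device_path, addr_traddr, dhchap_key, and so on) are assumed from the standard Linux nvmet layout because the echo redirect targets are not visible in the trace; the rpc.py wrapper path, the nvmet-tcp modprobe and the key1/ckey1 keyring names are likewise assumptions based on the surrounding run.

#!/usr/bin/env bash
# Sketch of one target-setup + authenticated-connect iteration (assumptions noted inline).
set -e

cfs=/sys/kernel/config/nvmet
subnqn=nqn.2024-02.io.spdk:cnode0
hostnqn=nqn.2024-02.io.spdk:host0
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # rpc_cmd in the trace; exact wrapper assumed

modprobe nvmet
modprobe nvmet-tcp                                # assumed; the tcp port type below needs it

# Target side: export a local namespace (/dev/nvme1n1 in this run) on 10.0.0.1:4420.
mkdir "$cfs/subsystems/$subnqn"
mkdir "$cfs/subsystems/$subnqn/namespaces/1"
mkdir "$cfs/ports/1"
echo 0            > "$cfs/subsystems/$subnqn/attr_allow_any_host"       # only allowed_hosts may connect
echo /dev/nvme1n1 > "$cfs/subsystems/$subnqn/namespaces/1/device_path"
echo 1            > "$cfs/subsystems/$subnqn/namespaces/1/enable"
echo 10.0.0.1     > "$cfs/ports/1/addr_traddr"
echo tcp          > "$cfs/ports/1/addr_trtype"
echo 4420         > "$cfs/ports/1/addr_trsvcid"
echo ipv4         > "$cfs/ports/1/addr_adrfam"
ln -s "$cfs/subsystems/$subnqn" "$cfs/ports/1/subsystems/"

# Register the host NQN and its DH-HMAC-CHAP material (nvmet_auth_set_key in host/auth.sh).
mkdir "$cfs/hosts/$hostnqn"
ln -s "$cfs/hosts/$hostnqn" "$cfs/subsystems/$subnqn/allowed_hosts/"
echo 'hmac(sha256)'      > "$cfs/hosts/$hostnqn/dhchap_hash"
echo ffdhe2048           > "$cfs/hosts/$hostnqn/dhchap_dhgroup"
echo 'DHHC-1:00:<key1>'  > "$cfs/hosts/$hostnqn/dhchap_key"       # host key (key 1 in the loop above)
echo 'DHHC-1:02:<ckey1>' > "$cfs/hosts/$hostnqn/dhchap_ctrl_key"  # controller (bidirectional) key

# Optional sanity check, as in the trace: the discovery log should list cnode0.
nvme discover -t tcp -a 10.0.0.1 -s 4420 --hostnqn="$hostnqn"

# Host side: connect with the SPDK bdev/nvme initiator using the matching key pair.
# key1/ckey1 are keyring entries registered earlier in the test run (not shown here).
"$rpc" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
"$rpc" bdev_nvme_get_controllers | jq -r '.[].name'               # prints "nvme0" on success
"$rpc" bdev_nvme_detach_controller nvme0

On a successful handshake, bdev_nvme_get_controllers reports the controller name, which is what the [[ nvme0 == \n\v\m\e\0 ]] checks in the trace assert before the controller is detached and the next digest/dhgroup/key combination is tried.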
00:16:35.012 nvme0n1 00:16:35.012 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.012 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:35.012 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.012 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:35.012 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.012 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.012 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.012 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:35.012 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.012 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.012 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.012 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:35.012 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:35.012 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:16:35.012 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:35.012 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:35.012 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:35.012 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:35.012 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdlMDQyY2MyN2EyZmM0NjE2MDk2YzQ1NzAxYmE5ZjI2cdk4: 00:16:35.012 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: 00:16:35.012 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:35.012 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:35.271 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdlMDQyY2MyN2EyZmM0NjE2MDk2YzQ1NzAxYmE5ZjI2cdk4: 00:16:35.271 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: ]] 00:16:35.271 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: 00:16:35.271 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:16:35.271 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:35.271 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:35.271 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:35.271 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:35.271 13:55:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:35.271 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:35.271 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.271 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.271 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.271 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:35.271 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:35.271 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:35.271 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:35.271 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:35.271 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:35.271 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:35.271 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:35.271 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:35.271 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:35.271 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:35.271 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.271 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.271 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.531 nvme0n1 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:35.531 13:55:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTE1MjhmOWM1MjcyZTUzOGVlZDhkMDJkYjMwOTY1NzI5OWZlMGM4MmEzNGFlMGVhGwOoaQ==: 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTE1MjhmOWM1MjcyZTUzOGVlZDhkMDJkYjMwOTY1NzI5OWZlMGM4MmEzNGFlMGVhGwOoaQ==: 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: ]] 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:35.531 13:55:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.531 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.791 nvme0n1 00:16:35.791 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.791 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:35.791 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.791 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.791 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:35.791 13:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmI5MTYwZjIzOTc4NGQ3NmVmNGE1ZTVlZDdlNWFjNGQiiCma: 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmI5MTYwZjIzOTc4NGQ3NmVmNGE1ZTVlZDdlNWFjNGQiiCma: 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: ]] 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.791 nvme0n1 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.791 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.050 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.050 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:16:36.050 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.050 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:36.050 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.050 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:36.050 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:16:36.050 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:36.050 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:36.050 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:36.050 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:36.050 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGFkNDM4YmRmNzkzMTYyMTU0MThhNTE4YWMyZWNjMzVmYTQ0ZTUzNGFhMWMyMjllsh5kQg==: 00:16:36.050 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: 00:16:36.050 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:36.050 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:36.050 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGFkNDM4YmRmNzkzMTYyMTU0MThhNTE4YWMyZWNjMzVmYTQ0ZTUzNGFhMWMyMjllsh5kQg==: 00:16:36.050 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: ]] 00:16:36.050 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: 00:16:36.050 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:16:36.050 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:36.050 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:36.050 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:36.050 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:36.050 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:36.050 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:36.050 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.050 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:36.050 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.050 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:36.050 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:36.050 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:36.050 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:36.050 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:36.051 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:36.051 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:36.051 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:36.051 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:36.051 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:36.051 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:36.051 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:36.051 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.051 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:36.051 nvme0n1 00:16:36.051 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.051 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:36.051 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:36.051 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.051 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:36.051 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.051 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.051 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:36.051 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.051 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:36.051 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.051 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:36.051 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:16:36.051 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:36.051 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:36.051 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:36.051 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:36.051 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGRmOWZjNjRhNTkwYTMxNTUxM2QwZjJiNmVkOGRjYmUxZWRiZjM2MTIyZTM2Njk4MWQ0MzdkYThjMTBhMWE5M8Ml828=: 00:16:36.051 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:36.051 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:36.051 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:36.051 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZGRmOWZjNjRhNTkwYTMxNTUxM2QwZjJiNmVkOGRjYmUxZWRiZjM2MTIyZTM2Njk4MWQ0MzdkYThjMTBhMWE5M8Ml828=: 00:16:36.051 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:36.051 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:16:36.051 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:36.051 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:36.051 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:36.051 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:36.051 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:36.051 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:36.051 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.051 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:36.310 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.310 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:36.310 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:36.310 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:36.310 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:36.310 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:36.310 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:36.310 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:36.310 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:36.310 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:36.310 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:36.310 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:36.310 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:36.310 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.310 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:36.310 nvme0n1 00:16:36.310 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.310 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:36.310 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.310 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:36.310 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:36.310 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.310 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.310 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:36.310 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.310 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:36.310 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.310 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:36.310 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:36.310 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:16:36.310 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:36.310 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:36.310 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:36.310 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:36.310 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdlMDQyY2MyN2EyZmM0NjE2MDk2YzQ1NzAxYmE5ZjI2cdk4: 00:16:36.310 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: 00:16:36.310 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:36.310 13:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:36.878 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdlMDQyY2MyN2EyZmM0NjE2MDk2YzQ1NzAxYmE5ZjI2cdk4: 00:16:36.878 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: ]] 00:16:36.878 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: 00:16:36.878 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:16:36.878 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:36.878 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:36.878 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:36.878 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:36.878 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:36.878 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:36.878 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.878 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:36.878 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.878 13:55:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:36.878 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:36.878 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:36.878 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:36.878 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:36.878 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:36.878 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:36.878 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:36.878 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:36.878 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:36.878 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:36.878 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.878 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.878 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.137 nvme0n1 00:16:37.137 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.137 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:37.137 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:37.137 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.137 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.137 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.137 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.137 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:37.137 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.137 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.396 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.396 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:37.396 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:16:37.396 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:37.396 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:37.396 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:37.396 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:37.396 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTE1MjhmOWM1MjcyZTUzOGVlZDhkMDJkYjMwOTY1NzI5OWZlMGM4MmEzNGFlMGVhGwOoaQ==: 00:16:37.396 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: 00:16:37.396 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:37.396 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:37.396 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTE1MjhmOWM1MjcyZTUzOGVlZDhkMDJkYjMwOTY1NzI5OWZlMGM4MmEzNGFlMGVhGwOoaQ==: 00:16:37.396 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: ]] 00:16:37.396 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: 00:16:37.396 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:16:37.396 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:37.396 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:37.396 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:37.396 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:37.396 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:37.396 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:37.396 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.396 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.396 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.396 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:37.396 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:37.396 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:37.396 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:37.396 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:37.396 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:37.396 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:37.396 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:37.396 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:37.396 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:37.396 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:37.396 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.396 13:55:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.396 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.396 nvme0n1 00:16:37.396 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.396 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:37.396 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.396 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.396 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:37.396 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.655 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.655 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:37.655 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.655 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.655 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.655 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:37.655 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:16:37.655 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:37.655 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:37.655 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:37.655 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:37.655 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmI5MTYwZjIzOTc4NGQ3NmVmNGE1ZTVlZDdlNWFjNGQiiCma: 00:16:37.655 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: 00:16:37.655 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:37.655 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:37.655 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmI5MTYwZjIzOTc4NGQ3NmVmNGE1ZTVlZDdlNWFjNGQiiCma: 00:16:37.655 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: ]] 00:16:37.655 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: 00:16:37.655 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:16:37.655 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:37.655 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:37.655 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:37.655 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:37.655 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:37.655 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:37.655 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.655 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.655 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.655 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:37.655 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:37.655 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:37.655 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:37.655 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:37.655 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:37.655 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:37.655 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:37.656 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:37.656 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:37.656 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:37.656 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.656 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.656 13:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.656 nvme0n1 00:16:37.656 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.656 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:37.656 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:37.656 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.656 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.656 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.914 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.914 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:37.914 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.914 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.914 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.914 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:37.914 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:16:37.914 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:37.914 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:37.914 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:37.914 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:37.914 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGFkNDM4YmRmNzkzMTYyMTU0MThhNTE4YWMyZWNjMzVmYTQ0ZTUzNGFhMWMyMjllsh5kQg==: 00:16:37.914 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: 00:16:37.914 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:37.914 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:37.914 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGFkNDM4YmRmNzkzMTYyMTU0MThhNTE4YWMyZWNjMzVmYTQ0ZTUzNGFhMWMyMjllsh5kQg==: 00:16:37.914 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: ]] 00:16:37.914 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: 00:16:37.914 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:16:37.914 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:37.914 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:37.914 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:37.914 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:37.914 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:37.914 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:37.914 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.914 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.914 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.914 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:37.914 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:37.914 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:37.914 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:37.914 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:37.914 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:37.914 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:37.914 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:37.914 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:37.914 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:37.914 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:37.915 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:37.915 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.915 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.915 nvme0n1 00:16:37.915 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.915 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:37.915 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.915 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:37.915 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.915 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.173 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.173 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:38.173 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.173 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.173 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.173 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:38.173 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:16:38.173 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:38.173 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:38.173 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:38.173 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:38.173 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGRmOWZjNjRhNTkwYTMxNTUxM2QwZjJiNmVkOGRjYmUxZWRiZjM2MTIyZTM2Njk4MWQ0MzdkYThjMTBhMWE5M8Ml828=: 00:16:38.173 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:38.173 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:38.173 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:38.173 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGRmOWZjNjRhNTkwYTMxNTUxM2QwZjJiNmVkOGRjYmUxZWRiZjM2MTIyZTM2Njk4MWQ0MzdkYThjMTBhMWE5M8Ml828=: 00:16:38.173 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:38.173 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:16:38.173 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:38.173 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:38.173 13:55:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:38.173 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:38.173 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:38.173 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:38.173 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.173 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.173 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.173 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:38.173 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:38.173 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:38.173 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:38.173 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:38.173 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:38.173 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:38.173 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:38.173 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:38.173 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:38.174 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:38.174 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:38.174 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.174 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.174 nvme0n1 00:16:38.174 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.174 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:38.174 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:38.174 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.174 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.174 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.432 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.432 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:38.432 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.432 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.432 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.432 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:38.432 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:38.432 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:16:38.432 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:38.432 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:38.432 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:38.432 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:38.432 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdlMDQyY2MyN2EyZmM0NjE2MDk2YzQ1NzAxYmE5ZjI2cdk4: 00:16:38.432 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: 00:16:38.432 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:38.432 13:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:39.808 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdlMDQyY2MyN2EyZmM0NjE2MDk2YzQ1NzAxYmE5ZjI2cdk4: 00:16:39.808 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: ]] 00:16:39.808 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: 00:16:39.809 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:16:39.809 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:39.809 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:39.809 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:39.809 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:39.809 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:39.809 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:39.809 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.809 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.809 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.809 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:39.809 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:39.809 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:39.809 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:39.809 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:39.809 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:39.809 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:39.809 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:39.809 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:39.809 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:39.809 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:39.809 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.809 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.809 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.067 nvme0n1 00:16:40.067 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.067 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:40.067 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.067 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.067 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:40.067 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.067 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.067 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:40.067 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.068 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.326 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.326 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:40.326 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:16:40.326 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:40.326 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:40.326 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:40.326 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:40.326 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTE1MjhmOWM1MjcyZTUzOGVlZDhkMDJkYjMwOTY1NzI5OWZlMGM4MmEzNGFlMGVhGwOoaQ==: 00:16:40.326 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: 00:16:40.326 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:40.326 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:40.326 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTE1MjhmOWM1MjcyZTUzOGVlZDhkMDJkYjMwOTY1NzI5OWZlMGM4MmEzNGFlMGVhGwOoaQ==: 00:16:40.326 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: ]] 00:16:40.326 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: 00:16:40.326 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:16:40.326 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:40.326 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:40.326 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:40.326 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:40.326 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:40.326 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:40.326 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.326 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.326 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.326 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:40.326 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:40.326 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:40.327 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:40.327 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:40.327 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:40.327 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:40.327 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:40.327 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:40.327 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:40.327 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:40.327 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.327 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.327 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.588 nvme0n1 00:16:40.588 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.588 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:40.588 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:40.588 13:55:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.588 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.588 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.588 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.588 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:40.588 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.588 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.588 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.588 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:40.588 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:16:40.588 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:40.588 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:40.588 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:40.588 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:40.588 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmI5MTYwZjIzOTc4NGQ3NmVmNGE1ZTVlZDdlNWFjNGQiiCma: 00:16:40.588 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: 00:16:40.588 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:40.588 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:40.588 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmI5MTYwZjIzOTc4NGQ3NmVmNGE1ZTVlZDdlNWFjNGQiiCma: 00:16:40.588 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: ]] 00:16:40.588 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: 00:16:40.588 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:16:40.588 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:40.588 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:40.588 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:40.589 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:40.589 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:40.589 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:40.589 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.589 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.589 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.589 13:55:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:40.589 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:40.589 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:40.589 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:40.589 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:40.589 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:40.589 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:40.589 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:40.589 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:40.589 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:40.589 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:40.589 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.589 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.589 13:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.848 nvme0n1 00:16:40.848 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.848 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:40.848 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:40.848 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.848 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.848 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.107 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.107 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:41.107 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.107 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.107 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.107 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:41.107 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:16:41.107 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:41.107 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:41.107 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:41.107 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:41.107 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OGFkNDM4YmRmNzkzMTYyMTU0MThhNTE4YWMyZWNjMzVmYTQ0ZTUzNGFhMWMyMjllsh5kQg==: 00:16:41.107 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: 00:16:41.107 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:41.107 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:41.107 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGFkNDM4YmRmNzkzMTYyMTU0MThhNTE4YWMyZWNjMzVmYTQ0ZTUzNGFhMWMyMjllsh5kQg==: 00:16:41.107 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: ]] 00:16:41.107 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: 00:16:41.108 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:16:41.108 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:41.108 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:41.108 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:41.108 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:41.108 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:41.108 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:41.108 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.108 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.108 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.108 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:41.108 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:41.108 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:41.108 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:41.108 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:41.108 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:41.108 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:41.108 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:41.108 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:41.108 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:41.108 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:41.108 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:41.108 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.108 
13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.367 nvme0n1 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGRmOWZjNjRhNTkwYTMxNTUxM2QwZjJiNmVkOGRjYmUxZWRiZjM2MTIyZTM2Njk4MWQ0MzdkYThjMTBhMWE5M8Ml828=: 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGRmOWZjNjRhNTkwYTMxNTUxM2QwZjJiNmVkOGRjYmUxZWRiZjM2MTIyZTM2Njk4MWQ0MzdkYThjMTBhMWE5M8Ml828=: 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.367 13:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.626 nvme0n1 00:16:41.626 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.626 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:41.626 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:41.626 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.626 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.626 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.886 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.886 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:41.886 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.886 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.886 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.886 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:41.886 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:41.886 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:16:41.886 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:41.886 13:55:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:41.886 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:41.886 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:41.886 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdlMDQyY2MyN2EyZmM0NjE2MDk2YzQ1NzAxYmE5ZjI2cdk4: 00:16:41.886 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: 00:16:41.886 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:41.886 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:41.886 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdlMDQyY2MyN2EyZmM0NjE2MDk2YzQ1NzAxYmE5ZjI2cdk4: 00:16:41.886 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: ]] 00:16:41.886 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: 00:16:41.886 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:16:41.886 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:41.886 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:41.886 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:41.886 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:41.886 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:41.886 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:41.886 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.886 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.886 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.886 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:41.886 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:41.886 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:41.886 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:41.886 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:41.886 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:41.887 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:41.887 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:41.887 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:41.887 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:41.887 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:41.887 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.887 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.887 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.454 nvme0n1 00:16:42.454 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.454 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:42.454 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:42.454 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.454 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.454 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.454 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.454 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:42.454 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.454 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.454 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.454 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:42.454 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:16:42.454 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:42.454 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:42.454 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:42.454 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:42.454 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTE1MjhmOWM1MjcyZTUzOGVlZDhkMDJkYjMwOTY1NzI5OWZlMGM4MmEzNGFlMGVhGwOoaQ==: 00:16:42.454 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: 00:16:42.454 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:42.454 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:42.454 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTE1MjhmOWM1MjcyZTUzOGVlZDhkMDJkYjMwOTY1NzI5OWZlMGM4MmEzNGFlMGVhGwOoaQ==: 00:16:42.454 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: ]] 00:16:42.454 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: 00:16:42.454 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:16:42.455 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:42.455 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:42.455 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:42.455 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:42.455 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:42.455 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:42.455 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.455 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.455 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.455 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:42.455 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:42.455 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:42.455 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:42.455 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:42.455 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:42.455 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:42.455 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:42.455 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:42.455 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:42.455 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:42.455 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.455 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.455 13:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.021 nvme0n1 00:16:43.021 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.021 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:43.021 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:43.021 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.021 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.021 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.021 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.021 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:43.021 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:43.021 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.021 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.021 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:43.021 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:16:43.021 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:43.021 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:43.021 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:43.021 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:43.021 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmI5MTYwZjIzOTc4NGQ3NmVmNGE1ZTVlZDdlNWFjNGQiiCma: 00:16:43.021 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: 00:16:43.021 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:43.021 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:43.021 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmI5MTYwZjIzOTc4NGQ3NmVmNGE1ZTVlZDdlNWFjNGQiiCma: 00:16:43.021 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: ]] 00:16:43.021 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: 00:16:43.021 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:16:43.021 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:43.021 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:43.021 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:43.021 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:43.021 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:43.021 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:43.021 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.021 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.022 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.022 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:43.022 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:43.022 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:43.022 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:43.022 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:43.022 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:43.022 
13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:43.022 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:43.022 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:43.022 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:43.022 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:43.022 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.022 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.022 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.588 nvme0n1 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGFkNDM4YmRmNzkzMTYyMTU0MThhNTE4YWMyZWNjMzVmYTQ0ZTUzNGFhMWMyMjllsh5kQg==: 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGFkNDM4YmRmNzkzMTYyMTU0MThhNTE4YWMyZWNjMzVmYTQ0ZTUzNGFhMWMyMjllsh5kQg==: 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: ]] 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.588 13:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.155 nvme0n1 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.155 13:55:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGRmOWZjNjRhNTkwYTMxNTUxM2QwZjJiNmVkOGRjYmUxZWRiZjM2MTIyZTM2Njk4MWQ0MzdkYThjMTBhMWE5M8Ml828=: 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGRmOWZjNjRhNTkwYTMxNTUxM2QwZjJiNmVkOGRjYmUxZWRiZjM2MTIyZTM2Njk4MWQ0MzdkYThjMTBhMWE5M8Ml828=: 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:44.155 13:55:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.155 13:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.722 nvme0n1 00:16:44.722 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.722 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:44.722 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:44.722 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.722 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.722 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.723 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.723 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:44.723 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.723 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.723 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.723 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:44.723 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:44.723 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:44.723 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:16:44.723 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:44.723 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:44.723 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:44.723 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:44.723 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdlMDQyY2MyN2EyZmM0NjE2MDk2YzQ1NzAxYmE5ZjI2cdk4: 00:16:44.723 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: 00:16:44.723 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:44.723 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:44.723 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdlMDQyY2MyN2EyZmM0NjE2MDk2YzQ1NzAxYmE5ZjI2cdk4: 00:16:44.723 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: ]] 00:16:44.723 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: 00:16:44.723 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:16:44.723 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:16:44.982 nvme0n1 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTE1MjhmOWM1MjcyZTUzOGVlZDhkMDJkYjMwOTY1NzI5OWZlMGM4MmEzNGFlMGVhGwOoaQ==: 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTE1MjhmOWM1MjcyZTUzOGVlZDhkMDJkYjMwOTY1NzI5OWZlMGM4MmEzNGFlMGVhGwOoaQ==: 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: ]] 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.982 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.242 nvme0n1 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:16:45.242 
13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmI5MTYwZjIzOTc4NGQ3NmVmNGE1ZTVlZDdlNWFjNGQiiCma: 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmI5MTYwZjIzOTc4NGQ3NmVmNGE1ZTVlZDdlNWFjNGQiiCma: 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: ]] 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.242 nvme0n1 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.242 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.501 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.501 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:45.501 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGFkNDM4YmRmNzkzMTYyMTU0MThhNTE4YWMyZWNjMzVmYTQ0ZTUzNGFhMWMyMjllsh5kQg==: 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGFkNDM4YmRmNzkzMTYyMTU0MThhNTE4YWMyZWNjMzVmYTQ0ZTUzNGFhMWMyMjllsh5kQg==: 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: ]] 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:45.502 
13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.502 nvme0n1 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGRmOWZjNjRhNTkwYTMxNTUxM2QwZjJiNmVkOGRjYmUxZWRiZjM2MTIyZTM2Njk4MWQ0MzdkYThjMTBhMWE5M8Ml828=: 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGRmOWZjNjRhNTkwYTMxNTUxM2QwZjJiNmVkOGRjYmUxZWRiZjM2MTIyZTM2Njk4MWQ0MzdkYThjMTBhMWE5M8Ml828=: 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.502 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.761 nvme0n1 00:16:45.761 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.761 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:45.762 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:45.762 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.762 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.762 13:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.762 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.762 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:45.762 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.762 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.762 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.762 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:45.762 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:45.762 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:16:45.762 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:45.762 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:45.762 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:45.762 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:45.762 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdlMDQyY2MyN2EyZmM0NjE2MDk2YzQ1NzAxYmE5ZjI2cdk4: 00:16:45.762 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: 00:16:45.762 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:45.762 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:45.762 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdlMDQyY2MyN2EyZmM0NjE2MDk2YzQ1NzAxYmE5ZjI2cdk4: 00:16:45.762 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: ]] 00:16:45.762 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: 00:16:45.762 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:16:45.762 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:45.762 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:45.762 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:45.762 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:45.762 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:45.762 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:45.762 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.762 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.762 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.762 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:45.762 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:45.762 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:45.762 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:45.762 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:45.762 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:45.762 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:45.762 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:45.762 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:45.762 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:45.762 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:45.762 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.762 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.762 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.020 nvme0n1 00:16:46.020 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.020 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.021 
13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTE1MjhmOWM1MjcyZTUzOGVlZDhkMDJkYjMwOTY1NzI5OWZlMGM4MmEzNGFlMGVhGwOoaQ==: 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTE1MjhmOWM1MjcyZTUzOGVlZDhkMDJkYjMwOTY1NzI5OWZlMGM4MmEzNGFlMGVhGwOoaQ==: 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: ]] 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:46.021 13:55:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.021 nvme0n1 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.021 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmI5MTYwZjIzOTc4NGQ3NmVmNGE1ZTVlZDdlNWFjNGQiiCma: 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: 00:16:46.280 13:55:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmI5MTYwZjIzOTc4NGQ3NmVmNGE1ZTVlZDdlNWFjNGQiiCma: 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: ]] 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.280 nvme0n1 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGFkNDM4YmRmNzkzMTYyMTU0MThhNTE4YWMyZWNjMzVmYTQ0ZTUzNGFhMWMyMjllsh5kQg==: 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGFkNDM4YmRmNzkzMTYyMTU0MThhNTE4YWMyZWNjMzVmYTQ0ZTUzNGFhMWMyMjllsh5kQg==: 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: ]] 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:46.280 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:46.281 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:46.281 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:46.281 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.281 13:55:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.281 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.281 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:46.281 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:46.281 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:46.281 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:46.281 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:46.281 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:46.281 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:46.281 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:46.281 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:46.281 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:46.281 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:46.281 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:46.281 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.281 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.539 nvme0n1 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:46.539 
13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGRmOWZjNjRhNTkwYTMxNTUxM2QwZjJiNmVkOGRjYmUxZWRiZjM2MTIyZTM2Njk4MWQ0MzdkYThjMTBhMWE5M8Ml828=: 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGRmOWZjNjRhNTkwYTMxNTUxM2QwZjJiNmVkOGRjYmUxZWRiZjM2MTIyZTM2Njk4MWQ0MzdkYThjMTBhMWE5M8Ml828=: 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.539 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
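Note: each iteration in the trace above runs the same DH-CHAP sequence for one digest/dhgroup/key combination, here hmac(sha384) with ffdhe3072, and below with ffdhe4096 and ffdhe6144. A minimal sketch of one such iteration, assuming SPDK's scripts/rpc.py is used in place of the test framework's rpc_cmd wrapper, a target is already listening on 10.0.0.1:4420, and the host keys were registered earlier under the names key0/ckey0 shown in this log (their creation is outside this excerpt):

  # restrict the host to one DH-CHAP digest and DH group, matching the target side
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
  # attach to the subsystem, authenticating with key0 (ckey0 enables bidirectional auth)
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # confirm the controller came up, then detach before the next key/dhgroup combination
  ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expects nvme0
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0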
00:16:46.797 nvme0n1 00:16:46.797 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.797 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:46.797 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:46.797 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.797 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.797 13:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.797 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.797 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:46.797 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.797 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.797 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.797 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:46.797 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:46.797 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:16:46.797 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:46.797 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:46.797 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:46.797 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:46.797 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdlMDQyY2MyN2EyZmM0NjE2MDk2YzQ1NzAxYmE5ZjI2cdk4: 00:16:46.797 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: 00:16:46.798 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:46.798 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:46.798 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdlMDQyY2MyN2EyZmM0NjE2MDk2YzQ1NzAxYmE5ZjI2cdk4: 00:16:46.798 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: ]] 00:16:46.798 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: 00:16:46.798 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:16:46.798 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:46.798 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:46.798 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:46.798 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:46.798 13:55:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:46.798 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:46.798 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.798 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.798 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.798 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:46.798 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:46.798 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:46.798 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:46.798 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:46.798 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:46.798 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:46.798 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:46.798 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:46.798 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:46.798 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:46.798 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.798 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.798 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.056 nvme0n1 00:16:47.056 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.056 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:47.056 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:47.056 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.056 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.056 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.056 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.056 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:47.056 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.056 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.056 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.056 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:47.056 13:55:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:16:47.056 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:47.056 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:47.056 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:47.056 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:47.056 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTE1MjhmOWM1MjcyZTUzOGVlZDhkMDJkYjMwOTY1NzI5OWZlMGM4MmEzNGFlMGVhGwOoaQ==: 00:16:47.056 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: 00:16:47.056 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:47.056 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:47.056 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTE1MjhmOWM1MjcyZTUzOGVlZDhkMDJkYjMwOTY1NzI5OWZlMGM4MmEzNGFlMGVhGwOoaQ==: 00:16:47.056 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: ]] 00:16:47.056 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: 00:16:47.056 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:16:47.056 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:47.056 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:47.056 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:47.056 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:47.056 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:47.056 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:47.056 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.056 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.056 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.056 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:47.056 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:47.056 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:47.056 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:47.056 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:47.057 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:47.057 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:47.057 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:47.057 13:55:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:47.057 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:47.057 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:47.057 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.057 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.057 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.315 nvme0n1 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmI5MTYwZjIzOTc4NGQ3NmVmNGE1ZTVlZDdlNWFjNGQiiCma: 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmI5MTYwZjIzOTc4NGQ3NmVmNGE1ZTVlZDdlNWFjNGQiiCma: 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: ]] 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.316 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.575 nvme0n1 00:16:47.575 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.575 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:47.575 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.575 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:47.575 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.575 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.575 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.575 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:16:47.575 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.575 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.575 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.575 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:47.575 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:16:47.575 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:47.575 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:47.575 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:47.575 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:47.575 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGFkNDM4YmRmNzkzMTYyMTU0MThhNTE4YWMyZWNjMzVmYTQ0ZTUzNGFhMWMyMjllsh5kQg==: 00:16:47.575 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: 00:16:47.575 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:47.575 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:47.575 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGFkNDM4YmRmNzkzMTYyMTU0MThhNTE4YWMyZWNjMzVmYTQ0ZTUzNGFhMWMyMjllsh5kQg==: 00:16:47.575 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: ]] 00:16:47.575 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: 00:16:47.575 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:16:47.575 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:47.575 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:47.575 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:47.575 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:47.575 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:47.575 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:47.575 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.575 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.575 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.575 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:47.575 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:47.575 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:47.576 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:47.576 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:47.576 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:47.576 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:47.576 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:47.576 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:47.576 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:47.576 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:47.576 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:47.576 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.576 13:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.835 nvme0n1 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGRmOWZjNjRhNTkwYTMxNTUxM2QwZjJiNmVkOGRjYmUxZWRiZjM2MTIyZTM2Njk4MWQ0MzdkYThjMTBhMWE5M8Ml828=: 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZGRmOWZjNjRhNTkwYTMxNTUxM2QwZjJiNmVkOGRjYmUxZWRiZjM2MTIyZTM2Njk4MWQ0MzdkYThjMTBhMWE5M8Ml828=: 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.835 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.095 nvme0n1 00:16:48.095 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.095 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:48.095 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.095 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.095 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:48.095 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.095 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.095 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:48.095 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.095 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.095 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.095 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:48.095 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:48.095 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:16:48.095 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:48.095 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:48.095 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:48.095 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:48.096 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdlMDQyY2MyN2EyZmM0NjE2MDk2YzQ1NzAxYmE5ZjI2cdk4: 00:16:48.096 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: 00:16:48.096 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:48.096 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:48.096 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdlMDQyY2MyN2EyZmM0NjE2MDk2YzQ1NzAxYmE5ZjI2cdk4: 00:16:48.096 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: ]] 00:16:48.096 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: 00:16:48.096 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:16:48.096 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:48.096 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:48.096 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:48.096 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:48.096 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:48.096 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:48.096 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.096 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.096 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.096 13:55:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:48.096 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:48.096 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:48.096 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:48.096 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:48.096 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:48.096 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:48.096 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:48.096 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:48.096 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:48.096 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:48.096 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.096 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.096 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.355 nvme0n1 00:16:48.355 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.355 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:48.355 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.355 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:48.355 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.355 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.614 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.614 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:48.614 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.614 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.614 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.614 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:48.614 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:16:48.614 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:48.614 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:48.614 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:48.614 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:48.614 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTE1MjhmOWM1MjcyZTUzOGVlZDhkMDJkYjMwOTY1NzI5OWZlMGM4MmEzNGFlMGVhGwOoaQ==: 00:16:48.614 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: 00:16:48.614 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:48.614 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:48.614 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTE1MjhmOWM1MjcyZTUzOGVlZDhkMDJkYjMwOTY1NzI5OWZlMGM4MmEzNGFlMGVhGwOoaQ==: 00:16:48.614 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: ]] 00:16:48.614 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: 00:16:48.614 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:16:48.614 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:48.614 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:48.614 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:48.614 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:48.614 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:48.614 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:48.614 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.614 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.614 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.614 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:48.614 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:48.614 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:48.614 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:48.614 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:48.615 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:48.615 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:48.615 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:48.615 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:48.615 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:48.615 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:48.615 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.615 13:55:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.615 13:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.873 nvme0n1 00:16:48.873 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.873 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:48.873 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:48.873 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.873 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.873 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.874 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.874 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:48.874 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.874 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.874 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.874 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:48.874 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:16:48.874 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:48.874 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:48.874 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:48.874 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:48.874 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmI5MTYwZjIzOTc4NGQ3NmVmNGE1ZTVlZDdlNWFjNGQiiCma: 00:16:48.874 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: 00:16:48.874 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:48.874 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:48.874 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmI5MTYwZjIzOTc4NGQ3NmVmNGE1ZTVlZDdlNWFjNGQiiCma: 00:16:48.874 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: ]] 00:16:48.874 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: 00:16:48.874 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:16:48.874 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:48.874 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:48.874 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:48.874 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:48.874 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:48.874 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:48.874 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.874 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.874 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.874 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:48.874 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:48.874 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:48.874 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:48.874 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:48.874 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:48.874 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:48.874 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:48.874 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:48.874 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:48.874 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:48.874 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.874 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.874 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.165 nvme0n1 00:16:49.165 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.165 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:49.165 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:49.165 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.165 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.165 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.424 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.424 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:49.424 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.424 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.424 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.424 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:49.424 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:16:49.424 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:49.424 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:49.424 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:49.424 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:49.424 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGFkNDM4YmRmNzkzMTYyMTU0MThhNTE4YWMyZWNjMzVmYTQ0ZTUzNGFhMWMyMjllsh5kQg==: 00:16:49.424 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: 00:16:49.424 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:49.424 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:49.424 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGFkNDM4YmRmNzkzMTYyMTU0MThhNTE4YWMyZWNjMzVmYTQ0ZTUzNGFhMWMyMjllsh5kQg==: 00:16:49.424 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: ]] 00:16:49.424 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: 00:16:49.424 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:16:49.424 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:49.424 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:49.424 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:49.424 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:49.424 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:49.424 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:49.424 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.424 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.424 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.424 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:49.424 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:49.424 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:49.424 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:49.424 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:49.424 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:49.424 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:49.424 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:49.424 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:49.424 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:49.424 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:49.424 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:49.424 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.424 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.686 nvme0n1 00:16:49.686 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.686 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:49.686 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:49.686 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.686 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.686 13:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.686 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.686 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:49.686 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.686 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.686 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.686 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:49.686 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:16:49.686 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:49.686 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:49.686 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:49.686 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:49.686 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGRmOWZjNjRhNTkwYTMxNTUxM2QwZjJiNmVkOGRjYmUxZWRiZjM2MTIyZTM2Njk4MWQ0MzdkYThjMTBhMWE5M8Ml828=: 00:16:49.686 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:49.686 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:49.686 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:49.686 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGRmOWZjNjRhNTkwYTMxNTUxM2QwZjJiNmVkOGRjYmUxZWRiZjM2MTIyZTM2Njk4MWQ0MzdkYThjMTBhMWE5M8Ml828=: 00:16:49.686 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:49.686 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:16:49.686 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:49.686 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:49.686 13:55:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:49.686 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:49.686 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:49.686 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:49.686 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.686 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.686 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.686 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:49.686 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:49.686 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:49.686 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:49.686 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:49.686 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:49.686 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:49.686 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:49.686 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:49.687 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:49.687 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:49.687 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:49.687 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.687 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.257 nvme0n1 00:16:50.257 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.257 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.257 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.257 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:50.257 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.257 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.257 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.257 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.257 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.257 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.257 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.257 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:50.257 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:50.257 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:16:50.257 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:50.257 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:50.257 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:50.257 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:50.257 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdlMDQyY2MyN2EyZmM0NjE2MDk2YzQ1NzAxYmE5ZjI2cdk4: 00:16:50.257 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: 00:16:50.257 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:50.257 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:50.257 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdlMDQyY2MyN2EyZmM0NjE2MDk2YzQ1NzAxYmE5ZjI2cdk4: 00:16:50.257 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: ]] 00:16:50.257 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: 00:16:50.257 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:16:50.257 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:50.257 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:50.257 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:50.257 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:50.257 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:50.257 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:50.257 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.257 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.257 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.257 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:50.257 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:50.257 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:50.257 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:50.257 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.258 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.258 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:50.258 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.258 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:50.258 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:50.258 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:50.258 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.258 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.258 13:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.824 nvme0n1 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTE1MjhmOWM1MjcyZTUzOGVlZDhkMDJkYjMwOTY1NzI5OWZlMGM4MmEzNGFlMGVhGwOoaQ==: 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTE1MjhmOWM1MjcyZTUzOGVlZDhkMDJkYjMwOTY1NzI5OWZlMGM4MmEzNGFlMGVhGwOoaQ==: 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: ]] 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.824 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.390 nvme0n1 00:16:51.390 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.390 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:51.390 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:51.390 13:55:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.390 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.390 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.650 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.650 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:51.650 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.650 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.650 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.650 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:51.650 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:16:51.651 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:51.651 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:51.651 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:51.651 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:51.651 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmI5MTYwZjIzOTc4NGQ3NmVmNGE1ZTVlZDdlNWFjNGQiiCma: 00:16:51.651 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: 00:16:51.651 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:51.651 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:51.651 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmI5MTYwZjIzOTc4NGQ3NmVmNGE1ZTVlZDdlNWFjNGQiiCma: 00:16:51.651 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: ]] 00:16:51.651 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: 00:16:51.651 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:16:51.651 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:51.651 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:51.651 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:51.651 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:51.651 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:51.651 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:51.651 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.651 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.651 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.651 13:55:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:51.651 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:51.651 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:51.651 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:51.651 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:51.651 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:51.651 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:51.651 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:51.651 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:51.651 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:51.651 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:51.651 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.651 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.651 13:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.220 nvme0n1 00:16:52.220 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.220 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.220 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.220 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.220 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:52.220 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.220 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.220 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.220 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.220 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.220 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.220 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:52.220 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:16:52.220 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:52.220 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:52.220 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:52.220 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:52.220 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OGFkNDM4YmRmNzkzMTYyMTU0MThhNTE4YWMyZWNjMzVmYTQ0ZTUzNGFhMWMyMjllsh5kQg==: 00:16:52.220 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: 00:16:52.220 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:52.220 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:52.220 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGFkNDM4YmRmNzkzMTYyMTU0MThhNTE4YWMyZWNjMzVmYTQ0ZTUzNGFhMWMyMjllsh5kQg==: 00:16:52.220 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: ]] 00:16:52.220 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: 00:16:52.220 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:16:52.220 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:52.220 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:52.220 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:52.220 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:52.220 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:52.220 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:52.221 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.221 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.221 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.221 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:52.221 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:52.221 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:52.221 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:52.221 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:52.221 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:52.221 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:52.221 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:52.221 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:52.221 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:52.221 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:52.221 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:52.221 13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.221 
13:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.157 nvme0n1 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGRmOWZjNjRhNTkwYTMxNTUxM2QwZjJiNmVkOGRjYmUxZWRiZjM2MTIyZTM2Njk4MWQ0MzdkYThjMTBhMWE5M8Ml828=: 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGRmOWZjNjRhNTkwYTMxNTUxM2QwZjJiNmVkOGRjYmUxZWRiZjM2MTIyZTM2Njk4MWQ0MzdkYThjMTBhMWE5M8Ml828=: 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.158 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.727 nvme0n1 00:16:53.727 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.727 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:53.727 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:53.728 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.728 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.728 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.728 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.728 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:53.728 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.728 13:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.728 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.728 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:53.728 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:53.728 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:53.728 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:16:53.728 13:55:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:53.728 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:53.728 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:53.728 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:53.728 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdlMDQyY2MyN2EyZmM0NjE2MDk2YzQ1NzAxYmE5ZjI2cdk4: 00:16:53.728 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: 00:16:53.728 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:53.728 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:53.728 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdlMDQyY2MyN2EyZmM0NjE2MDk2YzQ1NzAxYmE5ZjI2cdk4: 00:16:53.728 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: ]] 00:16:53.728 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: 00:16:53.728 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:16:53.728 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:53.728 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:53.728 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:53.728 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:53.728 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:53.728 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:53.728 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.728 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.728 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.728 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:53.728 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:53.728 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:53.728 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:53.728 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.728 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.728 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:53.728 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.728 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:53.728 13:55:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:53.728 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:53.728 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.728 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.728 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.988 nvme0n1 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTE1MjhmOWM1MjcyZTUzOGVlZDhkMDJkYjMwOTY1NzI5OWZlMGM4MmEzNGFlMGVhGwOoaQ==: 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTE1MjhmOWM1MjcyZTUzOGVlZDhkMDJkYjMwOTY1NzI5OWZlMGM4MmEzNGFlMGVhGwOoaQ==: 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: ]] 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: 00:16:53.988 13:55:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.988 nvme0n1 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:53.988 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmI5MTYwZjIzOTc4NGQ3NmVmNGE1ZTVlZDdlNWFjNGQiiCma: 00:16:53.989 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: 00:16:53.989 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:53.989 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:53.989 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmI5MTYwZjIzOTc4NGQ3NmVmNGE1ZTVlZDdlNWFjNGQiiCma: 00:16:53.989 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: ]] 00:16:53.989 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: 00:16:53.989 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:16:53.989 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:53.989 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:53.989 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:53.989 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:53.989 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:53.989 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:53.989 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.989 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.989 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.248 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:54.248 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:54.248 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:54.248 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:54.248 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:54.248 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:54.248 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:54.248 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:54.248 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:54.248 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:54.248 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:54.248 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.248 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.248 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.248 nvme0n1 00:16:54.248 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.248 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:54.248 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.248 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:54.248 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.249 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.249 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.249 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:54.249 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.249 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.249 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.249 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:54.249 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:16:54.249 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:54.249 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:54.249 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:54.249 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:54.249 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGFkNDM4YmRmNzkzMTYyMTU0MThhNTE4YWMyZWNjMzVmYTQ0ZTUzNGFhMWMyMjllsh5kQg==: 00:16:54.249 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: 00:16:54.249 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:54.249 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:54.249 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:OGFkNDM4YmRmNzkzMTYyMTU0MThhNTE4YWMyZWNjMzVmYTQ0ZTUzNGFhMWMyMjllsh5kQg==: 00:16:54.249 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: ]] 00:16:54.249 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: 00:16:54.249 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:16:54.249 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:54.249 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:54.249 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:54.249 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:54.249 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:54.249 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:54.249 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.249 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.249 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.249 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:54.249 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:54.249 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:54.249 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:54.249 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:54.249 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:54.249 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:54.249 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:54.249 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:54.249 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:54.249 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:54.249 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:54.249 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.249 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.508 nvme0n1 00:16:54.508 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.508 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:54.508 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:54.508 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.508 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.508 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.508 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.508 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:54.508 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.508 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.509 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.509 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:54.509 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:16:54.509 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:54.509 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:54.509 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:54.509 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:54.509 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGRmOWZjNjRhNTkwYTMxNTUxM2QwZjJiNmVkOGRjYmUxZWRiZjM2MTIyZTM2Njk4MWQ0MzdkYThjMTBhMWE5M8Ml828=: 00:16:54.509 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:54.509 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:54.509 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:54.509 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGRmOWZjNjRhNTkwYTMxNTUxM2QwZjJiNmVkOGRjYmUxZWRiZjM2MTIyZTM2Njk4MWQ0MzdkYThjMTBhMWE5M8Ml828=: 00:16:54.509 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:54.509 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:16:54.509 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:54.509 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:54.509 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:54.509 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:54.509 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:54.509 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:54.509 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.509 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.509 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.509 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:54.509 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:54.509 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:16:54.509 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:54.509 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:54.509 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:54.509 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:54.509 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:54.509 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:54.509 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:54.509 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:54.509 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:54.509 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.509 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.509 nvme0n1 00:16:54.509 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.509 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:54.509 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:54.509 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.509 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.509 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.769 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.769 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:54.769 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.769 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.769 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.769 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:54.769 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:54.769 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:16:54.769 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:54.769 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:54.769 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:54.769 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:54.769 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdlMDQyY2MyN2EyZmM0NjE2MDk2YzQ1NzAxYmE5ZjI2cdk4: 00:16:54.769 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: 00:16:54.769 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:54.769 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:54.769 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdlMDQyY2MyN2EyZmM0NjE2MDk2YzQ1NzAxYmE5ZjI2cdk4: 00:16:54.769 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: ]] 00:16:54.769 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: 00:16:54.769 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:16:54.769 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:54.769 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:54.769 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:54.769 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:54.769 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:54.769 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:54.769 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.769 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.769 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.769 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:54.769 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:54.769 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:54.769 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:54.769 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:54.769 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:54.769 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:54.769 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:54.769 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:54.769 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:54.769 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:54.769 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.769 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.769 13:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:16:54.769 nvme0n1 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTE1MjhmOWM1MjcyZTUzOGVlZDhkMDJkYjMwOTY1NzI5OWZlMGM4MmEzNGFlMGVhGwOoaQ==: 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTE1MjhmOWM1MjcyZTUzOGVlZDhkMDJkYjMwOTY1NzI5OWZlMGM4MmEzNGFlMGVhGwOoaQ==: 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: ]] 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.769 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.770 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.029 nvme0n1 00:16:55.029 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.029 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:55.029 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:55.029 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.029 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.029 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.029 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.029 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:55.029 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.029 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.029 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.029 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:55.029 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:16:55.029 
13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:55.029 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:55.029 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:55.029 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:55.029 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmI5MTYwZjIzOTc4NGQ3NmVmNGE1ZTVlZDdlNWFjNGQiiCma: 00:16:55.029 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: 00:16:55.029 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:55.029 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:55.030 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmI5MTYwZjIzOTc4NGQ3NmVmNGE1ZTVlZDdlNWFjNGQiiCma: 00:16:55.030 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: ]] 00:16:55.030 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: 00:16:55.030 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:16:55.030 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:55.030 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:55.030 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:55.030 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:55.030 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:55.030 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:55.030 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.030 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.030 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.030 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:55.030 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:55.030 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:55.030 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:55.030 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:55.030 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:55.030 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:55.030 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:55.030 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:55.030 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:55.030 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:55.030 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.030 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.030 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.289 nvme0n1 00:16:55.289 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.289 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:55.289 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:55.289 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.289 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.289 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.289 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.289 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:55.289 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.289 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.289 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.289 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:55.289 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:16:55.289 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:55.289 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:55.289 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:55.289 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:55.290 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGFkNDM4YmRmNzkzMTYyMTU0MThhNTE4YWMyZWNjMzVmYTQ0ZTUzNGFhMWMyMjllsh5kQg==: 00:16:55.290 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: 00:16:55.290 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:55.290 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:55.290 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGFkNDM4YmRmNzkzMTYyMTU0MThhNTE4YWMyZWNjMzVmYTQ0ZTUzNGFhMWMyMjllsh5kQg==: 00:16:55.290 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: ]] 00:16:55.290 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: 00:16:55.290 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:16:55.290 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:55.290 
13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:55.290 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:55.290 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:55.290 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:55.290 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:55.290 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.290 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.290 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.290 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:55.290 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:55.290 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:55.290 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:55.290 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:55.290 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:55.290 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:55.290 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:55.290 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:55.290 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:55.290 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:55.290 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:55.290 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.290 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.290 nvme0n1 00:16:55.290 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.290 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:55.290 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:55.290 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.290 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.549 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.549 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.549 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:55.549 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.549 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:16:55.549 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.549 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:55.549 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:16:55.549 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:55.549 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:55.549 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:55.549 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:55.549 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGRmOWZjNjRhNTkwYTMxNTUxM2QwZjJiNmVkOGRjYmUxZWRiZjM2MTIyZTM2Njk4MWQ0MzdkYThjMTBhMWE5M8Ml828=: 00:16:55.549 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:55.549 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:55.549 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:55.549 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGRmOWZjNjRhNTkwYTMxNTUxM2QwZjJiNmVkOGRjYmUxZWRiZjM2MTIyZTM2Njk4MWQ0MzdkYThjMTBhMWE5M8Ml828=: 00:16:55.549 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:55.549 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:16:55.549 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:55.549 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:55.549 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:55.549 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:55.549 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:55.549 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:55.549 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.549 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.549 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.549 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:55.549 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:55.549 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:55.549 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:55.549 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:55.549 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:55.550 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:55.550 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:55.550 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:55.550 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:55.550 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:55.550 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:55.550 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.550 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.550 nvme0n1 00:16:55.550 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.550 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:55.550 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.550 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.550 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:55.550 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.550 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.550 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:55.550 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.550 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.813 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.813 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:55.813 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:55.813 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:16:55.813 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:55.813 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:55.813 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:55.813 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:55.813 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdlMDQyY2MyN2EyZmM0NjE2MDk2YzQ1NzAxYmE5ZjI2cdk4: 00:16:55.813 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: 00:16:55.813 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:55.813 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:55.813 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdlMDQyY2MyN2EyZmM0NjE2MDk2YzQ1NzAxYmE5ZjI2cdk4: 00:16:55.813 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: ]] 00:16:55.813 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: 00:16:55.813 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:16:55.813 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:55.813 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:55.813 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:55.813 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:55.813 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:55.813 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:55.813 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.813 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.813 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.813 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:55.813 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:55.813 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:55.813 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:55.813 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:55.813 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:55.813 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:55.813 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:55.813 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:55.813 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:55.813 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:55.813 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.813 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.813 13:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.813 nvme0n1 00:16:55.813 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.813 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:55.813 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:55.813 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.813 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.813 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.073 
13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTE1MjhmOWM1MjcyZTUzOGVlZDhkMDJkYjMwOTY1NzI5OWZlMGM4MmEzNGFlMGVhGwOoaQ==: 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTE1MjhmOWM1MjcyZTUzOGVlZDhkMDJkYjMwOTY1NzI5OWZlMGM4MmEzNGFlMGVhGwOoaQ==: 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: ]] 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:56.073 13:55:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.073 nvme0n1 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.073 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.332 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.332 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:56.332 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.332 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.332 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.332 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:56.332 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:16:56.332 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:56.332 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:56.332 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:56.332 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:56.332 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmI5MTYwZjIzOTc4NGQ3NmVmNGE1ZTVlZDdlNWFjNGQiiCma: 00:16:56.332 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: 00:16:56.332 13:55:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:56.332 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:56.332 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmI5MTYwZjIzOTc4NGQ3NmVmNGE1ZTVlZDdlNWFjNGQiiCma: 00:16:56.332 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: ]] 00:16:56.332 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: 00:16:56.332 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:16:56.332 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:56.332 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:56.332 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:56.332 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:56.332 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:56.332 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:56.332 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.332 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.332 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.332 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:56.332 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:56.332 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:56.332 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:56.332 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:56.332 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:56.332 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:56.332 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:56.332 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:56.332 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:56.332 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:56.332 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.332 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.333 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.591 nvme0n1 00:16:56.591 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.591 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:56.591 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:56.591 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.591 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.591 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.591 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.591 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:56.591 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.591 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.591 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.591 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:56.591 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:16:56.591 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:56.591 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:56.591 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:56.591 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:56.591 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGFkNDM4YmRmNzkzMTYyMTU0MThhNTE4YWMyZWNjMzVmYTQ0ZTUzNGFhMWMyMjllsh5kQg==: 00:16:56.591 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: 00:16:56.591 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:56.591 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:56.591 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGFkNDM4YmRmNzkzMTYyMTU0MThhNTE4YWMyZWNjMzVmYTQ0ZTUzNGFhMWMyMjllsh5kQg==: 00:16:56.591 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: ]] 00:16:56.592 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: 00:16:56.592 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:16:56.592 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:56.592 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:56.592 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:56.592 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:56.592 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:56.592 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:56.592 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.592 13:55:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.592 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.592 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:56.592 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:56.592 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:56.592 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:56.592 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:56.592 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:56.592 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:56.592 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:56.592 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:56.592 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:56.592 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:56.592 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:56.592 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.592 13:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.851 nvme0n1 00:16:56.851 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.851 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:56.851 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.851 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:56.851 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.851 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.851 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.851 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:56.851 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.851 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.851 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.851 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:56.851 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:16:56.851 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:56.851 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:56.851 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:56.851 
13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:56.851 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGRmOWZjNjRhNTkwYTMxNTUxM2QwZjJiNmVkOGRjYmUxZWRiZjM2MTIyZTM2Njk4MWQ0MzdkYThjMTBhMWE5M8Ml828=: 00:16:56.851 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:56.851 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:56.851 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:56.851 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGRmOWZjNjRhNTkwYTMxNTUxM2QwZjJiNmVkOGRjYmUxZWRiZjM2MTIyZTM2Njk4MWQ0MzdkYThjMTBhMWE5M8Ml828=: 00:16:56.851 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:56.851 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:16:56.851 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:56.851 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:56.851 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:56.851 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:56.851 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:56.851 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:56.851 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.851 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.851 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.852 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:56.852 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:56.852 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:56.852 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:56.852 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:56.852 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:56.852 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:56.852 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:56.852 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:56.852 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:56.852 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:56.852 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:56.852 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.852 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
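The block above is one full pass of the sha512 + ffdhe4096 iteration: for each keyid the target-side secret is installed with nvmet_auth_set_key, connect_authenticate then pins the initiator to that digest/dhgroup via bdev_nvme_set_options, attaches the controller with the matching --dhchap-key (plus --dhchap-ctrlr-key when a controller secret exists), checks that nvme0 shows up in bdev_nvme_get_controllers, and detaches it again. A minimal standalone sketch of that cycle, reconstructed only from the rpc_cmd calls visible in this trace, and assuming rpc_cmd forwards to scripts/rpc.py and that keys named key2/ckey2 were already loaded into the keyring by the test setup (both assumptions, not shown in this log):

    # one connect/verify/detach pass, mirroring the traced rpc_cmd calls
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expected output: nvme0
    scripts/rpc.py bdev_nvme_detach_controller nvme0

The remainder of the trace repeats the same pattern for the ffdhe6144 and ffdhe8192 dhgroups and each keyid before moving on to the negative (unauthenticated) connection checks.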
00:16:57.111 nvme0n1 00:16:57.111 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.111 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:57.111 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:57.111 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.111 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.111 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.111 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.111 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:57.111 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.111 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.111 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.111 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:57.111 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:57.111 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:16:57.111 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:57.111 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:57.111 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:57.111 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:57.111 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdlMDQyY2MyN2EyZmM0NjE2MDk2YzQ1NzAxYmE5ZjI2cdk4: 00:16:57.111 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: 00:16:57.111 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:57.111 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:57.111 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdlMDQyY2MyN2EyZmM0NjE2MDk2YzQ1NzAxYmE5ZjI2cdk4: 00:16:57.111 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: ]] 00:16:57.111 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: 00:16:57.111 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:16:57.111 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:57.112 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:57.112 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:57.112 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:57.112 13:55:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:57.112 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:57.112 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.112 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.112 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.112 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:57.112 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:57.112 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:57.112 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:57.112 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:57.112 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:57.112 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:57.112 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:57.112 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:57.112 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:57.112 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:57.112 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.112 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.112 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.680 nvme0n1 00:16:57.680 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.680 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:57.680 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:57.680 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.680 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.680 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.681 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.681 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:57.681 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.681 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.681 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.681 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:57.681 13:55:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:16:57.681 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:57.681 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:57.681 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:57.681 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:57.681 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTE1MjhmOWM1MjcyZTUzOGVlZDhkMDJkYjMwOTY1NzI5OWZlMGM4MmEzNGFlMGVhGwOoaQ==: 00:16:57.681 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: 00:16:57.681 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:57.681 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:57.681 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTE1MjhmOWM1MjcyZTUzOGVlZDhkMDJkYjMwOTY1NzI5OWZlMGM4MmEzNGFlMGVhGwOoaQ==: 00:16:57.681 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: ]] 00:16:57.681 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: 00:16:57.681 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:16:57.681 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:57.681 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:57.681 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:57.681 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:57.681 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:57.681 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:57.681 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.681 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.681 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.681 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:57.681 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:57.681 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:57.681 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:57.681 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:57.681 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:57.681 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:57.681 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:57.681 13:55:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:57.681 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:57.681 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:57.681 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.681 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.681 13:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.940 nvme0n1 00:16:57.940 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.940 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:57.940 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.940 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:57.940 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.940 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.940 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.940 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:57.940 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.940 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.940 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.941 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:57.941 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:16:57.941 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:57.941 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:57.941 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:57.941 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:57.941 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmI5MTYwZjIzOTc4NGQ3NmVmNGE1ZTVlZDdlNWFjNGQiiCma: 00:16:57.941 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: 00:16:57.941 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:57.941 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:57.941 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmI5MTYwZjIzOTc4NGQ3NmVmNGE1ZTVlZDdlNWFjNGQiiCma: 00:16:57.941 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: ]] 00:16:57.941 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: 00:16:57.941 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:16:57.941 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:57.941 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:57.941 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:57.941 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:57.941 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:57.941 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:57.941 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.941 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.941 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.941 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:57.941 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:57.941 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:57.941 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:57.941 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:57.941 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:57.941 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:57.941 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:57.941 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:57.941 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:57.941 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:57.941 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.941 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.941 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.509 nvme0n1 00:16:58.509 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.509 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:58.509 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.509 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.509 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:58.509 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.509 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.509 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:16:58.509 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.509 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.509 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.509 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:58.509 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:16:58.509 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:58.509 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:58.509 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:58.509 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:58.510 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGFkNDM4YmRmNzkzMTYyMTU0MThhNTE4YWMyZWNjMzVmYTQ0ZTUzNGFhMWMyMjllsh5kQg==: 00:16:58.510 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: 00:16:58.510 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:58.510 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:58.510 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGFkNDM4YmRmNzkzMTYyMTU0MThhNTE4YWMyZWNjMzVmYTQ0ZTUzNGFhMWMyMjllsh5kQg==: 00:16:58.510 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: ]] 00:16:58.510 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: 00:16:58.510 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:16:58.510 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:58.510 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:58.510 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:58.510 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:58.510 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:58.510 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:58.510 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.510 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.510 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.510 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:58.510 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:58.510 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:58.510 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:58.510 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:58.510 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:58.510 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:58.510 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:58.510 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:58.510 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:58.510 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:58.510 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:58.510 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.510 13:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.769 nvme0n1 00:16:58.769 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.769 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:58.769 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:58.769 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.769 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.769 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.028 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.028 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:59.028 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.028 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.028 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.028 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:59.028 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:16:59.028 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:59.028 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:59.028 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:59.028 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:59.028 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGRmOWZjNjRhNTkwYTMxNTUxM2QwZjJiNmVkOGRjYmUxZWRiZjM2MTIyZTM2Njk4MWQ0MzdkYThjMTBhMWE5M8Ml828=: 00:16:59.028 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:59.029 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:59.029 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:59.029 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZGRmOWZjNjRhNTkwYTMxNTUxM2QwZjJiNmVkOGRjYmUxZWRiZjM2MTIyZTM2Njk4MWQ0MzdkYThjMTBhMWE5M8Ml828=: 00:16:59.029 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:59.029 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:16:59.029 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:59.029 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:59.029 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:59.029 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:59.029 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:59.029 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:59.029 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.029 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.029 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.029 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:59.029 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:59.029 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:59.029 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:59.029 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:59.029 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:59.029 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:59.029 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:59.029 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:59.029 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:59.029 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:59.029 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:59.029 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.029 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.288 nvme0n1 00:16:59.288 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.288 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:59.288 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.288 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.288 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:59.288 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.288 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.288 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:59.288 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.288 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.288 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.288 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:59.288 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:59.288 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:16:59.288 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:59.288 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:59.288 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:59.288 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:59.288 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDdlMDQyY2MyN2EyZmM0NjE2MDk2YzQ1NzAxYmE5ZjI2cdk4: 00:16:59.288 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: 00:16:59.288 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:59.288 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:59.288 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDdlMDQyY2MyN2EyZmM0NjE2MDk2YzQ1NzAxYmE5ZjI2cdk4: 00:16:59.288 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: ]] 00:16:59.288 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjY4ZmJkNTIwNzhkYTUxMTZlZGIyMTljZTZiYzc1OTA4YTdjZjRjYmMxMjgwNTllY2E3ZGFmNTM4YzBhMmU0NLEDezg=: 00:16:59.288 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:16:59.288 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:59.288 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:59.288 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:59.288 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:59.288 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:59.289 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:59.289 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.289 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.289 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.289 13:55:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:59.289 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:59.289 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:59.289 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:59.289 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:59.289 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:59.289 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:59.289 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:59.289 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:59.289 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:59.289 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:59.289 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.289 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.289 13:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.225 nvme0n1 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTE1MjhmOWM1MjcyZTUzOGVlZDhkMDJkYjMwOTY1NzI5OWZlMGM4MmEzNGFlMGVhGwOoaQ==: 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTE1MjhmOWM1MjcyZTUzOGVlZDhkMDJkYjMwOTY1NzI5OWZlMGM4MmEzNGFlMGVhGwOoaQ==: 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: ]] 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.225 13:55:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.225 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.792 nvme0n1 00:17:00.792 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.792 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:00.792 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.792 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:00.792 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.792 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.792 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.792 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:00.792 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.792 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.792 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.792 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:00.792 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:17:00.792 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:00.792 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:00.792 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:00.792 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:00.793 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmI5MTYwZjIzOTc4NGQ3NmVmNGE1ZTVlZDdlNWFjNGQiiCma: 00:17:00.793 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: 00:17:00.793 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:00.793 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:00.793 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmI5MTYwZjIzOTc4NGQ3NmVmNGE1ZTVlZDdlNWFjNGQiiCma: 00:17:00.793 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: ]] 00:17:00.793 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: 00:17:00.793 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:17:00.793 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:00.793 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:00.793 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:00.793 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:00.793 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:00.793 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:00.793 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.793 13:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.793 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.793 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:00.793 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:00.793 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:00.793 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:00.793 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:00.793 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:00.793 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:00.793 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:00.793 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:00.793 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:00.793 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:00.793 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.793 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.793 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.362 nvme0n1 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGFkNDM4YmRmNzkzMTYyMTU0MThhNTE4YWMyZWNjMzVmYTQ0ZTUzNGFhMWMyMjllsh5kQg==: 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGFkNDM4YmRmNzkzMTYyMTU0MThhNTE4YWMyZWNjMzVmYTQ0ZTUzNGFhMWMyMjllsh5kQg==: 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: ]] 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTZiMzFhM2I2OWQ1YzNkODEyNTBjNGIwMjBiZjZhN2MnzkgE: 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.362 13:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.929 nvme0n1 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGRmOWZjNjRhNTkwYTMxNTUxM2QwZjJiNmVkOGRjYmUxZWRiZjM2MTIyZTM2Njk4MWQ0MzdkYThjMTBhMWE5M8Ml828=: 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGRmOWZjNjRhNTkwYTMxNTUxM2QwZjJiNmVkOGRjYmUxZWRiZjM2MTIyZTM2Njk4MWQ0MzdkYThjMTBhMWE5M8Ml828=: 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:01.930 13:56:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.930 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.497 nvme0n1 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTE1MjhmOWM1MjcyZTUzOGVlZDhkMDJkYjMwOTY1NzI5OWZlMGM4MmEzNGFlMGVhGwOoaQ==: 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTE1MjhmOWM1MjcyZTUzOGVlZDhkMDJkYjMwOTY1NzI5OWZlMGM4MmEzNGFlMGVhGwOoaQ==: 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: ]] 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.497 request: 00:17:02.497 { 00:17:02.497 "name": "nvme0", 00:17:02.497 "trtype": "tcp", 00:17:02.497 "traddr": "10.0.0.1", 00:17:02.497 "adrfam": "ipv4", 00:17:02.497 "trsvcid": "4420", 00:17:02.497 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:02.497 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:02.497 "prchk_reftag": false, 00:17:02.497 "prchk_guard": false, 00:17:02.497 "hdgst": false, 00:17:02.497 "ddgst": false, 00:17:02.497 "allow_unrecognized_csi": false, 00:17:02.497 "method": "bdev_nvme_attach_controller", 00:17:02.497 "req_id": 1 00:17:02.497 } 00:17:02.497 Got JSON-RPC error response 00:17:02.497 response: 00:17:02.497 { 00:17:02.497 "code": -5, 00:17:02.497 "message": "Input/output error" 00:17:02.497 } 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:17:02.497 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.756 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.756 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:17:02.756 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:17:02.756 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:02.756 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:02.756 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:02.756 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.756 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.756 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:02.756 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.756 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:02.756 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:02.756 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:02.756 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:02.756 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:17:02.756 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:02.756 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:02.756 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:02.756 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:02.756 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:02.756 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:02.756 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.756 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.756 request: 00:17:02.756 { 00:17:02.756 "name": "nvme0", 00:17:02.756 "trtype": "tcp", 00:17:02.756 "traddr": "10.0.0.1", 00:17:02.756 "adrfam": "ipv4", 00:17:02.756 "trsvcid": "4420", 00:17:02.756 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:02.756 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:02.756 "prchk_reftag": false, 00:17:02.756 "prchk_guard": false, 00:17:02.756 "hdgst": false, 00:17:02.756 "ddgst": false, 00:17:02.756 "dhchap_key": "key2", 00:17:02.756 "allow_unrecognized_csi": false, 00:17:02.756 "method": "bdev_nvme_attach_controller", 00:17:02.756 "req_id": 1 00:17:02.756 } 00:17:02.756 Got JSON-RPC error response 00:17:02.756 response: 00:17:02.756 { 00:17:02.756 "code": -5, 00:17:02.756 "message": "Input/output error" 00:17:02.756 } 00:17:02.756 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:02.756 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:17:02.756 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:02.756 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:02.756 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:02.756 13:56:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:17:02.756 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:17:02.756 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.756 13:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.756 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.756 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:17:02.756 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:17:02.756 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:02.756 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:02.756 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:02.756 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.756 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.756 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:02.756 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.756 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:02.756 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:02.756 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:02.756 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:02.756 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:17:02.756 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:02.756 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:02.756 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:02.756 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:02.756 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:02.756 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:02.756 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.756 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.756 request: 00:17:02.756 { 00:17:02.756 "name": "nvme0", 00:17:02.756 "trtype": "tcp", 00:17:02.756 "traddr": "10.0.0.1", 00:17:02.756 "adrfam": "ipv4", 00:17:02.756 "trsvcid": "4420", 
00:17:02.756 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:02.756 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:02.756 "prchk_reftag": false, 00:17:02.756 "prchk_guard": false, 00:17:02.756 "hdgst": false, 00:17:02.756 "ddgst": false, 00:17:02.756 "dhchap_key": "key1", 00:17:02.756 "dhchap_ctrlr_key": "ckey2", 00:17:02.756 "allow_unrecognized_csi": false, 00:17:02.756 "method": "bdev_nvme_attach_controller", 00:17:02.756 "req_id": 1 00:17:02.756 } 00:17:02.756 Got JSON-RPC error response 00:17:02.756 response: 00:17:02.756 { 00:17:02.756 "code": -5, 00:17:02.756 "message": "Input/output error" 00:17:02.756 } 00:17:02.757 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:02.757 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:17:02.757 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:02.757 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:02.757 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:02.757 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:17:02.757 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:02.757 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:02.757 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:02.757 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.757 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.757 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:02.757 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.757 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:02.757 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:02.757 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:02.757 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:02.757 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.757 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.015 nvme0n1 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:YmI5MTYwZjIzOTc4NGQ3NmVmNGE1ZTVlZDdlNWFjNGQiiCma: 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmI5MTYwZjIzOTc4NGQ3NmVmNGE1ZTVlZDdlNWFjNGQiiCma: 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: ]] 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.015 request: 00:17:03.015 { 00:17:03.015 "name": "nvme0", 00:17:03.015 "dhchap_key": "key1", 00:17:03.015 "dhchap_ctrlr_key": "ckey2", 00:17:03.015 "method": "bdev_nvme_set_keys", 00:17:03.015 "req_id": 1 00:17:03.015 } 00:17:03.015 Got JSON-RPC error response 00:17:03.015 response: 00:17:03.015 
{ 00:17:03.015 "code": -13, 00:17:03.015 "message": "Permission denied" 00:17:03.015 } 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:17:03.015 13:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTE1MjhmOWM1MjcyZTUzOGVlZDhkMDJkYjMwOTY1NzI5OWZlMGM4MmEzNGFlMGVhGwOoaQ==: 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTE1MjhmOWM1MjcyZTUzOGVlZDhkMDJkYjMwOTY1NzI5OWZlMGM4MmEzNGFlMGVhGwOoaQ==: 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: ]] 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2MwMjJhZjY4N2NmNmE3MmU0M2ZjNzNkMjg3MmRkNjQxM2ZkNzBhODcxNTkyOWEyQ4RbWA==: 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.392 nvme0n1 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmI5MTYwZjIzOTc4NGQ3NmVmNGE1ZTVlZDdlNWFjNGQiiCma: 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmI5MTYwZjIzOTc4NGQ3NmVmNGE1ZTVlZDdlNWFjNGQiiCma: 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: ]] 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTQzMmVlMTk1NTg0YzcwMjBjYzQ2Y2YxZDI5YWNlNTUCCDNS: 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.392 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.392 request: 00:17:04.392 { 00:17:04.392 "name": "nvme0", 00:17:04.392 "dhchap_key": "key2", 00:17:04.392 "dhchap_ctrlr_key": "ckey1", 00:17:04.392 "method": "bdev_nvme_set_keys", 00:17:04.392 "req_id": 1 00:17:04.392 } 00:17:04.392 Got JSON-RPC error response 00:17:04.393 response: 00:17:04.393 { 00:17:04.393 "code": -13, 00:17:04.393 "message": "Permission denied" 00:17:04.393 } 00:17:04.393 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:04.393 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:17:04.393 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:04.393 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:04.393 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:04.393 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:17:04.393 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.393 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.393 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.393 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.393 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:17:04.393 13:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:17:05.330 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.330 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:17:05.330 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.330 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.330 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.330 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:17:05.330 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:17:05.330 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:17:05.330 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:17:05.330 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:17:05.330 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:17:05.330 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:05.330 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:17:05.330 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:05.330 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:05.330 rmmod nvme_tcp 00:17:05.330 rmmod nvme_fabrics 00:17:05.330 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:05.589 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:17:05.589 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:17:05.589 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 78246 ']' 00:17:05.589 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 78246 00:17:05.589 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 78246 ']' 00:17:05.589 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 78246 00:17:05.589 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:17:05.589 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:05.590 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78246 00:17:05.590 killing process with pid 78246 00:17:05.590 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:05.590 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:05.590 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78246' 00:17:05.590 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 78246 00:17:05.590 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 78246 00:17:05.590 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:05.590 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:05.590 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:05.590 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:17:05.590 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:17:05.590 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:17:05.590 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:05.590 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:05.590 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:05.590 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:05.590 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:05.590 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:05.590 13:56:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:05.859 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:05.859 13:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:05.859 13:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:05.859 13:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:05.859 13:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:05.859 13:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:05.859 13:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:05.859 13:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:05.859 13:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:05.859 13:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:05.859 13:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.859 13:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:05.859 13:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.859 13:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:17:05.859 13:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:05.859 13:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:05.859 13:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:17:05.859 13:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:17:05.859 13:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:17:05.859 13:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:05.859 13:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:05.859 13:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:05.859 13:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:05.859 13:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:17:05.859 13:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:17:05.859 13:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:06.522 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:06.781 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
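With the auth cases done, cleanup unwinds everything in reverse: the nvme-tcp/nvme-fabrics modules are removed, the nvmf_tgt process (pid 78246) is killed, iptables rules and the veth/bridge/netns topology are deleted, and the kernel nvmet configfs tree is torn down child-first before setup.sh rebinds the test NVMe devices to uio_pci_generic. The configfs portion, condensed from the trace (the redirection target of the bare "echo 0" is not visible in xtrace output and is assumed here to be the namespace enable attribute):

  rm    /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
  rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable  # assumed target
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  modprobe -r nvmet_tcp nvmet

Removing the allowed_hosts link and the port/subsystem link before the directories matters because configfs refuses to rmdir a node that still has children or symlinks.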
00:17:06.781 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:06.781 13:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.fLE /tmp/spdk.key-null.rqs /tmp/spdk.key-sha256.F2F /tmp/spdk.key-sha384.Ec7 /tmp/spdk.key-sha512.haX /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:17:06.781 13:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:07.351 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:07.351 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:07.351 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:07.351 00:17:07.351 real 0m36.945s 00:17:07.351 user 0m33.724s 00:17:07.351 sys 0m3.889s 00:17:07.351 13:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:07.351 13:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.351 ************************************ 00:17:07.351 END TEST nvmf_auth_host 00:17:07.351 ************************************ 00:17:07.351 13:56:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:17:07.351 13:56:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:07.351 13:56:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:07.351 13:56:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:07.351 13:56:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.351 ************************************ 00:17:07.351 START TEST nvmf_digest 00:17:07.351 ************************************ 00:17:07.351 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:07.351 * Looking for test storage... 
00:17:07.351 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:07.351 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:07.351 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:17:07.351 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:07.611 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:07.611 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:07.611 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:07.611 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:07.611 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:17:07.611 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:17:07.611 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:17:07.611 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:17:07.611 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:17:07.611 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:17:07.611 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:17:07.611 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:07.611 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:17:07.611 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:17:07.611 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:07.611 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:07.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.612 --rc genhtml_branch_coverage=1 00:17:07.612 --rc genhtml_function_coverage=1 00:17:07.612 --rc genhtml_legend=1 00:17:07.612 --rc geninfo_all_blocks=1 00:17:07.612 --rc geninfo_unexecuted_blocks=1 00:17:07.612 00:17:07.612 ' 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:07.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.612 --rc genhtml_branch_coverage=1 00:17:07.612 --rc genhtml_function_coverage=1 00:17:07.612 --rc genhtml_legend=1 00:17:07.612 --rc geninfo_all_blocks=1 00:17:07.612 --rc geninfo_unexecuted_blocks=1 00:17:07.612 00:17:07.612 ' 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:07.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.612 --rc genhtml_branch_coverage=1 00:17:07.612 --rc genhtml_function_coverage=1 00:17:07.612 --rc genhtml_legend=1 00:17:07.612 --rc geninfo_all_blocks=1 00:17:07.612 --rc geninfo_unexecuted_blocks=1 00:17:07.612 00:17:07.612 ' 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:07.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.612 --rc genhtml_branch_coverage=1 00:17:07.612 --rc genhtml_function_coverage=1 00:17:07.612 --rc genhtml_legend=1 00:17:07.612 --rc geninfo_all_blocks=1 00:17:07.612 --rc geninfo_unexecuted_blocks=1 00:17:07.612 00:17:07.612 ' 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:07.612 13:56:06 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=cfa2def7-c8af-457f-82a0-b312efdea7f4 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:07.612 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:07.612 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:07.613 Cannot find device "nvmf_init_br" 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:07.613 Cannot find device "nvmf_init_br2" 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:07.613 Cannot find device "nvmf_tgt_br" 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:17:07.613 Cannot find device "nvmf_tgt_br2" 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:07.613 Cannot find device "nvmf_init_br" 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:07.613 Cannot find device "nvmf_init_br2" 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:07.613 Cannot find device "nvmf_tgt_br" 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:07.613 Cannot find device "nvmf_tgt_br2" 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:07.613 Cannot find device "nvmf_br" 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:07.613 Cannot find device "nvmf_init_if" 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:07.613 Cannot find device "nvmf_init_if2" 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:07.613 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:07.613 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:07.613 13:56:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:07.873 13:56:07 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:07.873 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:07.873 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:17:07.873 00:17:07.873 --- 10.0.0.3 ping statistics --- 00:17:07.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.873 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:07.873 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:07.873 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:17:07.873 00:17:07.873 --- 10.0.0.4 ping statistics --- 00:17:07.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.873 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:07.873 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:07.873 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:17:07.873 00:17:07.873 --- 10.0.0.1 ping statistics --- 00:17:07.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.873 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:07.873 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:07.873 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:17:07.873 00:17:07.873 --- 10.0.0.2 ping statistics --- 00:17:07.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.873 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:07.873 ************************************ 00:17:07.873 START TEST nvmf_digest_clean 00:17:07.873 ************************************ 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
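The nvmftestinit trace above is the virtual test network the digest tests run over: the target sits in the nvmf_tgt_ns_spdk namespace, the initiator stays in the root namespace, veth pairs carry the traffic, an nvmf_br bridge ties the peer ends together, iptables rules open TCP port 4420, and connectivity is verified with ping. A minimal standalone sketch of the same topology, using the interface names and addresses shown in the trace (the nvmf_veth_init helper in test/nvmf/common.sh creates a second pair per side and does extra cleanup and error handling not repeated here):

  # target namespace and one veth pair per side
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # address and bring up both ends
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

  # bridge the peer ends and open the NVMe/TCP port
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

  # sanity check, matching the pings recorded above
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1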
00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=79901 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 79901 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79901 ']' 00:17:07.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:07.873 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:08.132 [2024-12-06 13:56:07.302765] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:17:08.132 [2024-12-06 13:56:07.302849] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:08.132 [2024-12-06 13:56:07.459273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.132 [2024-12-06 13:56:07.513357] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:08.132 [2024-12-06 13:56:07.513420] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:08.132 [2024-12-06 13:56:07.513436] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:08.132 [2024-12-06 13:56:07.513447] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:08.132 [2024-12-06 13:56:07.513457] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:08.132 [2024-12-06 13:56:07.513918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.391 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:08.391 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:17:08.391 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:08.391 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:08.391 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:08.391 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:08.391 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:17:08.391 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:17:08.391 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:17:08.391 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.391 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:08.391 [2024-12-06 13:56:07.670045] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:08.391 null0 00:17:08.391 [2024-12-06 13:56:07.735949] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:08.391 [2024-12-06 13:56:07.760143] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:08.391 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.391 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:17:08.391 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:08.391 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:08.391 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:08.392 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:08.392 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:08.392 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:08.392 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79925 00:17:08.392 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:08.392 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79925 /var/tmp/bperf.sock 00:17:08.392 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79925 ']' 00:17:08.392 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:17:08.392 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:08.392 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:08.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:08.392 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:08.392 13:56:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:08.651 [2024-12-06 13:56:07.826056] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:17:08.651 [2024-12-06 13:56:07.826191] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79925 ] 00:17:08.651 [2024-12-06 13:56:07.978720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.651 [2024-12-06 13:56:08.042138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:08.911 13:56:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:08.911 13:56:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:17:08.911 13:56:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:08.911 13:56:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:08.911 13:56:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:09.171 [2024-12-06 13:56:08.510855] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:09.430 13:56:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:09.430 13:56:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:09.690 nvme0n1 00:17:09.690 13:56:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:09.690 13:56:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:09.690 Running I/O for 2 seconds... 
00:17:12.005 15113.00 IOPS, 59.04 MiB/s [2024-12-06T13:56:11.409Z] 16192.50 IOPS, 63.25 MiB/s 00:17:12.005 Latency(us) 00:17:12.005 [2024-12-06T13:56:11.409Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.005 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:12.005 nvme0n1 : 2.01 16200.37 63.28 0.00 0.00 7894.72 6583.39 28597.53 00:17:12.005 [2024-12-06T13:56:11.409Z] =================================================================================================================== 00:17:12.005 [2024-12-06T13:56:11.409Z] Total : 16200.37 63.28 0.00 0.00 7894.72 6583.39 28597.53 00:17:12.005 { 00:17:12.005 "results": [ 00:17:12.005 { 00:17:12.005 "job": "nvme0n1", 00:17:12.005 "core_mask": "0x2", 00:17:12.005 "workload": "randread", 00:17:12.005 "status": "finished", 00:17:12.005 "queue_depth": 128, 00:17:12.005 "io_size": 4096, 00:17:12.005 "runtime": 2.006929, 00:17:12.005 "iops": 16200.373804952742, 00:17:12.005 "mibps": 63.28271017559665, 00:17:12.005 "io_failed": 0, 00:17:12.005 "io_timeout": 0, 00:17:12.005 "avg_latency_us": 7894.724791370164, 00:17:12.005 "min_latency_us": 6583.389090909091, 00:17:12.005 "max_latency_us": 28597.52727272727 00:17:12.005 } 00:17:12.005 ], 00:17:12.005 "core_count": 1 00:17:12.005 } 00:17:12.005 13:56:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:12.005 13:56:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:12.005 13:56:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:12.005 13:56:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:12.005 | select(.opcode=="crc32c") 00:17:12.005 | "\(.module_name) \(.executed)"' 00:17:12.005 13:56:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:12.005 13:56:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:12.005 13:56:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:12.005 13:56:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:12.005 13:56:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:12.005 13:56:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79925 00:17:12.005 13:56:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79925 ']' 00:17:12.005 13:56:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79925 00:17:12.005 13:56:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:17:12.005 13:56:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:12.005 13:56:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79925 00:17:12.263 killing process with pid 79925 00:17:12.263 Received shutdown signal, test time was about 2.000000 seconds 00:17:12.263 00:17:12.263 Latency(us) 00:17:12.263 [2024-12-06T13:56:11.667Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:17:12.263 [2024-12-06T13:56:11.667Z] =================================================================================================================== 00:17:12.263 [2024-12-06T13:56:11.667Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:12.263 13:56:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:12.263 13:56:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:12.263 13:56:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79925' 00:17:12.263 13:56:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79925 00:17:12.263 13:56:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79925 00:17:12.263 13:56:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:17:12.263 13:56:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:12.263 13:56:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:12.263 13:56:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:12.263 13:56:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:12.263 13:56:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:12.263 13:56:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:12.263 13:56:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79978 00:17:12.263 13:56:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:12.263 13:56:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79978 /var/tmp/bperf.sock 00:17:12.263 13:56:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79978 ']' 00:17:12.263 13:56:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:12.264 13:56:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:12.264 13:56:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:12.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:12.264 13:56:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:12.264 13:56:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:12.522 [2024-12-06 13:56:11.676369] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:17:12.522 [2024-12-06 13:56:11.676865] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79978 ] 00:17:12.522 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:12.522 Zero copy mechanism will not be used. 00:17:12.522 [2024-12-06 13:56:11.823258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.522 [2024-12-06 13:56:11.867187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:12.522 13:56:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:12.522 13:56:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:17:12.522 13:56:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:12.522 13:56:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:12.522 13:56:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:13.133 [2024-12-06 13:56:12.263368] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:13.133 13:56:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:13.133 13:56:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:13.391 nvme0n1 00:17:13.391 13:56:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:13.391 13:56:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:13.391 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:13.391 Zero copy mechanism will not be used. 00:17:13.391 Running I/O for 2 seconds... 
00:17:15.704 8464.00 IOPS, 1058.00 MiB/s [2024-12-06T13:56:15.108Z] 8560.00 IOPS, 1070.00 MiB/s 00:17:15.704 Latency(us) 00:17:15.704 [2024-12-06T13:56:15.108Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:15.704 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:15.704 nvme0n1 : 2.00 8555.22 1069.40 0.00 0.00 1867.38 1653.29 6255.71 00:17:15.704 [2024-12-06T13:56:15.108Z] =================================================================================================================== 00:17:15.704 [2024-12-06T13:56:15.108Z] Total : 8555.22 1069.40 0.00 0.00 1867.38 1653.29 6255.71 00:17:15.704 { 00:17:15.704 "results": [ 00:17:15.704 { 00:17:15.704 "job": "nvme0n1", 00:17:15.704 "core_mask": "0x2", 00:17:15.704 "workload": "randread", 00:17:15.704 "status": "finished", 00:17:15.704 "queue_depth": 16, 00:17:15.704 "io_size": 131072, 00:17:15.704 "runtime": 2.002987, 00:17:15.704 "iops": 8555.222774785858, 00:17:15.704 "mibps": 1069.4028468482322, 00:17:15.704 "io_failed": 0, 00:17:15.704 "io_timeout": 0, 00:17:15.704 "avg_latency_us": 1867.3847313470842, 00:17:15.704 "min_latency_us": 1653.2945454545454, 00:17:15.704 "max_latency_us": 6255.709090909091 00:17:15.704 } 00:17:15.704 ], 00:17:15.704 "core_count": 1 00:17:15.704 } 00:17:15.704 13:56:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:15.704 13:56:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:15.704 13:56:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:15.704 13:56:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:15.704 13:56:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:15.704 | select(.opcode=="crc32c") 00:17:15.704 | "\(.module_name) \(.executed)"' 00:17:15.704 13:56:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:15.704 13:56:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:15.704 13:56:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:15.704 13:56:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:15.704 13:56:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79978 00:17:15.704 13:56:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79978 ']' 00:17:15.704 13:56:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79978 00:17:15.704 13:56:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:17:15.704 13:56:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:15.704 13:56:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79978 00:17:15.704 killing process with pid 79978 00:17:15.704 Received shutdown signal, test time was about 2.000000 seconds 00:17:15.704 00:17:15.704 Latency(us) 00:17:15.704 [2024-12-06T13:56:15.108Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:17:15.704 [2024-12-06T13:56:15.108Z] =================================================================================================================== 00:17:15.704 [2024-12-06T13:56:15.108Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:15.704 13:56:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:15.704 13:56:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:15.705 13:56:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79978' 00:17:15.705 13:56:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79978 00:17:15.705 13:56:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79978 00:17:15.966 13:56:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:17:15.966 13:56:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:15.966 13:56:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:15.966 13:56:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:15.966 13:56:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:15.966 13:56:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:15.966 13:56:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:15.966 13:56:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80025 00:17:15.966 13:56:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:15.966 13:56:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80025 /var/tmp/bperf.sock 00:17:15.966 13:56:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80025 ']' 00:17:15.966 13:56:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:15.966 13:56:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:15.966 13:56:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:15.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:15.966 13:56:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:15.966 13:56:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:15.966 [2024-12-06 13:56:15.320984] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:17:15.966 [2024-12-06 13:56:15.321402] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80025 ] 00:17:16.225 [2024-12-06 13:56:15.463169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.225 [2024-12-06 13:56:15.513418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:16.225 13:56:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:16.225 13:56:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:17:16.225 13:56:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:16.225 13:56:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:16.225 13:56:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:16.483 [2024-12-06 13:56:15.822911] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:16.483 13:56:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:16.483 13:56:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:17.049 nvme0n1 00:17:17.049 13:56:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:17.049 13:56:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:17.049 Running I/O for 2 seconds... 
00:17:18.924 19051.00 IOPS, 74.42 MiB/s [2024-12-06T13:56:18.328Z] 18606.00 IOPS, 72.68 MiB/s 00:17:18.924 Latency(us) 00:17:18.924 [2024-12-06T13:56:18.328Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:18.924 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:18.924 nvme0n1 : 2.00 18630.70 72.78 0.00 0.00 6865.04 6225.92 17158.52 00:17:18.924 [2024-12-06T13:56:18.328Z] =================================================================================================================== 00:17:18.924 [2024-12-06T13:56:18.328Z] Total : 18630.70 72.78 0.00 0.00 6865.04 6225.92 17158.52 00:17:18.924 { 00:17:18.924 "results": [ 00:17:18.924 { 00:17:18.924 "job": "nvme0n1", 00:17:18.924 "core_mask": "0x2", 00:17:18.924 "workload": "randwrite", 00:17:18.924 "status": "finished", 00:17:18.924 "queue_depth": 128, 00:17:18.924 "io_size": 4096, 00:17:18.924 "runtime": 2.004219, 00:17:18.924 "iops": 18630.69854142686, 00:17:18.924 "mibps": 72.77616617744867, 00:17:18.924 "io_failed": 0, 00:17:18.924 "io_timeout": 0, 00:17:18.924 "avg_latency_us": 6865.037206992258, 00:17:18.924 "min_latency_us": 6225.92, 00:17:18.924 "max_latency_us": 17158.516363636365 00:17:18.924 } 00:17:18.924 ], 00:17:18.924 "core_count": 1 00:17:18.924 } 00:17:18.924 13:56:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:18.924 13:56:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:18.924 13:56:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:18.924 13:56:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:18.924 13:56:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:18.924 | select(.opcode=="crc32c") 00:17:18.924 | "\(.module_name) \(.executed)"' 00:17:19.510 13:56:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:19.510 13:56:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:19.510 13:56:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:19.510 13:56:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:19.510 13:56:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80025 00:17:19.510 13:56:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80025 ']' 00:17:19.510 13:56:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80025 00:17:19.510 13:56:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:17:19.510 13:56:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:19.510 13:56:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80025 00:17:19.510 killing process with pid 80025 00:17:19.510 Received shutdown signal, test time was about 2.000000 seconds 00:17:19.510 00:17:19.510 Latency(us) 00:17:19.510 [2024-12-06T13:56:18.914Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.510 
[2024-12-06T13:56:18.914Z] =================================================================================================================== 00:17:19.510 [2024-12-06T13:56:18.914Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:19.510 13:56:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:19.510 13:56:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:19.511 13:56:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80025' 00:17:19.511 13:56:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80025 00:17:19.511 13:56:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80025 00:17:19.511 13:56:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:17:19.511 13:56:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:19.511 13:56:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:19.511 13:56:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:19.511 13:56:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:19.511 13:56:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:19.511 13:56:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:19.511 13:56:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80079 00:17:19.511 13:56:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80079 /var/tmp/bperf.sock 00:17:19.511 13:56:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:19.511 13:56:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80079 ']' 00:17:19.511 13:56:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:19.511 13:56:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:19.511 13:56:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:19.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:19.511 13:56:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:19.511 13:56:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:19.511 [2024-12-06 13:56:18.875773] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:17:19.511 [2024-12-06 13:56:18.876315] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80079 ] 00:17:19.511 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:19.511 Zero copy mechanism will not be used. 00:17:19.770 [2024-12-06 13:56:19.022626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.770 [2024-12-06 13:56:19.067783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:19.770 13:56:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:19.770 13:56:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:17:19.770 13:56:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:19.770 13:56:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:19.770 13:56:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:20.029 [2024-12-06 13:56:19.424876] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:20.288 13:56:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:20.289 13:56:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:20.548 nvme0n1 00:17:20.548 13:56:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:20.548 13:56:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:20.548 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:20.548 Zero copy mechanism will not be used. 00:17:20.548 Running I/O for 2 seconds... 
00:17:22.494 7121.00 IOPS, 890.12 MiB/s [2024-12-06T13:56:22.158Z] 7242.00 IOPS, 905.25 MiB/s 00:17:22.754 Latency(us) 00:17:22.754 [2024-12-06T13:56:22.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:22.754 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:22.754 nvme0n1 : 2.00 7239.26 904.91 0.00 0.00 2205.19 1906.50 10843.23 00:17:22.754 [2024-12-06T13:56:22.158Z] =================================================================================================================== 00:17:22.754 [2024-12-06T13:56:22.158Z] Total : 7239.26 904.91 0.00 0.00 2205.19 1906.50 10843.23 00:17:22.754 { 00:17:22.754 "results": [ 00:17:22.754 { 00:17:22.754 "job": "nvme0n1", 00:17:22.754 "core_mask": "0x2", 00:17:22.754 "workload": "randwrite", 00:17:22.754 "status": "finished", 00:17:22.754 "queue_depth": 16, 00:17:22.754 "io_size": 131072, 00:17:22.754 "runtime": 2.002967, 00:17:22.754 "iops": 7239.260556963744, 00:17:22.754 "mibps": 904.907569620468, 00:17:22.754 "io_failed": 0, 00:17:22.754 "io_timeout": 0, 00:17:22.754 "avg_latency_us": 2205.1857334169276, 00:17:22.754 "min_latency_us": 1906.5018181818182, 00:17:22.754 "max_latency_us": 10843.229090909092 00:17:22.754 } 00:17:22.754 ], 00:17:22.754 "core_count": 1 00:17:22.754 } 00:17:22.754 13:56:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:22.754 13:56:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:22.754 13:56:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:22.754 13:56:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:22.754 | select(.opcode=="crc32c") 00:17:22.754 | "\(.module_name) \(.executed)"' 00:17:22.754 13:56:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:23.013 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:23.013 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:23.013 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:23.013 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:23.013 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80079 00:17:23.013 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80079 ']' 00:17:23.013 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80079 00:17:23.013 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:17:23.013 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:23.013 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80079 00:17:23.013 killing process with pid 80079 00:17:23.013 Received shutdown signal, test time was about 2.000000 seconds 00:17:23.013 00:17:23.013 Latency(us) 00:17:23.013 [2024-12-06T13:56:22.417Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:17:23.013 [2024-12-06T13:56:22.417Z] =================================================================================================================== 00:17:23.013 [2024-12-06T13:56:22.417Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:23.013 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:23.013 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:23.013 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80079' 00:17:23.013 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80079 00:17:23.013 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80079 00:17:23.272 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 79901 00:17:23.272 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79901 ']' 00:17:23.272 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79901 00:17:23.272 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:17:23.272 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:23.272 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79901 00:17:23.272 killing process with pid 79901 00:17:23.272 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:23.272 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:23.272 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79901' 00:17:23.272 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79901 00:17:23.272 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79901 00:17:23.272 ************************************ 00:17:23.272 END TEST nvmf_digest_clean 00:17:23.272 ************************************ 00:17:23.272 00:17:23.272 real 0m15.429s 00:17:23.272 user 0m29.839s 00:17:23.272 sys 0m4.633s 00:17:23.272 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:23.272 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:23.531 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:17:23.531 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:23.531 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:23.531 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:23.531 ************************************ 00:17:23.531 START TEST nvmf_digest_error 00:17:23.531 ************************************ 00:17:23.531 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:17:23.531 13:56:22 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:17:23.531 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:23.531 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:23.531 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:23.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:23.531 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=80155 00:17:23.531 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 80155 00:17:23.531 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80155 ']' 00:17:23.531 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:23.531 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.531 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:23.531 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.531 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:23.531 13:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:23.531 [2024-12-06 13:56:22.782722] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:17:23.531 [2024-12-06 13:56:22.782815] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:23.531 [2024-12-06 13:56:22.929826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.791 [2024-12-06 13:56:22.973209] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:23.791 [2024-12-06 13:56:22.973256] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:23.791 [2024-12-06 13:56:22.973266] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:23.791 [2024-12-06 13:56:22.973272] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:23.791 [2024-12-06 13:56:22.973278] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:23.791 [2024-12-06 13:56:22.973590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.359 13:56:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:24.359 13:56:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:17:24.359 13:56:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:24.359 13:56:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:24.359 13:56:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:24.359 13:56:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:24.359 13:56:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:17:24.359 13:56:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.359 13:56:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:24.359 [2024-12-06 13:56:23.718010] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:17:24.359 13:56:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.359 13:56:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:17:24.359 13:56:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:17:24.359 13:56:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.359 13:56:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:24.618 [2024-12-06 13:56:23.778796] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:24.618 null0 00:17:24.618 [2024-12-06 13:56:23.828699] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:24.618 [2024-12-06 13:56:23.852826] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:24.618 13:56:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.618 13:56:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:17:24.618 13:56:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:24.618 13:56:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:17:24.618 13:56:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:17:24.618 13:56:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:17:24.618 13:56:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80187 00:17:24.618 13:56:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:17:24.618 13:56:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80187 /var/tmp/bperf.sock 00:17:24.618 13:56:23 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80187 ']' 00:17:24.619 13:56:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:24.619 13:56:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:24.619 13:56:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:24.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:24.619 13:56:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:24.619 13:56:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:24.619 [2024-12-06 13:56:23.904816] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:17:24.619 [2024-12-06 13:56:23.905246] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80187 ] 00:17:24.879 [2024-12-06 13:56:24.050354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.879 [2024-12-06 13:56:24.105129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:24.879 [2024-12-06 13:56:24.163681] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:24.879 13:56:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:24.879 13:56:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:17:24.879 13:56:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:24.879 13:56:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:25.137 13:56:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:25.137 13:56:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.137 13:56:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:25.396 13:56:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.396 13:56:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:25.396 13:56:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:25.655 nvme0n1 00:17:25.655 13:56:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:25.655 13:56:24 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.655 13:56:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:25.655 13:56:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.655 13:56:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:25.655 13:56:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:25.655 Running I/O for 2 seconds... 00:17:25.655 [2024-12-06 13:56:25.003842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:25.655 [2024-12-06 13:56:25.003939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.655 [2024-12-06 13:56:25.003954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.655 [2024-12-06 13:56:25.019880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:25.655 [2024-12-06 13:56:25.019919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.655 [2024-12-06 13:56:25.019950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.655 [2024-12-06 13:56:25.036094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:25.655 [2024-12-06 13:56:25.036164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.655 [2024-12-06 13:56:25.036195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.655 [2024-12-06 13:56:25.052140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:25.655 [2024-12-06 13:56:25.052389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.655 [2024-12-06 13:56:25.052410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.913 [2024-12-06 13:56:25.069023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:25.913 [2024-12-06 13:56:25.069265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.913 [2024-12-06 13:56:25.069409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.913 [2024-12-06 13:56:25.085880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:25.913 [2024-12-06 13:56:25.086150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11402 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.913 [2024-12-06 13:56:25.086304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.913 [2024-12-06 13:56:25.101652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:25.913 [2024-12-06 13:56:25.101886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.913 [2024-12-06 13:56:25.102011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.913 [2024-12-06 13:56:25.116961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:25.913 [2024-12-06 13:56:25.117224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.913 [2024-12-06 13:56:25.117411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.913 [2024-12-06 13:56:25.132948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:25.913 [2024-12-06 13:56:25.133187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.913 [2024-12-06 13:56:25.133377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.913 [2024-12-06 13:56:25.149614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:25.913 [2024-12-06 13:56:25.149843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.913 [2024-12-06 13:56:25.149967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.913 [2024-12-06 13:56:25.166868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:25.913 [2024-12-06 13:56:25.167138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.913 [2024-12-06 13:56:25.167268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.913 [2024-12-06 13:56:25.184091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:25.913 [2024-12-06 13:56:25.184383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.913 [2024-12-06 13:56:25.184578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.913 [2024-12-06 13:56:25.205758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:25.913 [2024-12-06 13:56:25.205830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:82 nsid:1 lba:5728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.913 [2024-12-06 13:56:25.205845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.913 [2024-12-06 13:56:25.225383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:25.913 [2024-12-06 13:56:25.225463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:23778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.913 [2024-12-06 13:56:25.225492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.913 [2024-12-06 13:56:25.243771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:25.913 [2024-12-06 13:56:25.243832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.913 [2024-12-06 13:56:25.243848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.913 [2024-12-06 13:56:25.262825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:25.913 [2024-12-06 13:56:25.263080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.913 [2024-12-06 13:56:25.263101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.913 [2024-12-06 13:56:25.281832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:25.913 [2024-12-06 13:56:25.282091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.913 [2024-12-06 13:56:25.282127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.913 [2024-12-06 13:56:25.300762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:25.913 [2024-12-06 13:56:25.300840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.913 [2024-12-06 13:56:25.300855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.171 [2024-12-06 13:56:25.319101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.171 [2024-12-06 13:56:25.319192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.171 [2024-12-06 13:56:25.319222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.171 [2024-12-06 13:56:25.338210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.171 [2024-12-06 13:56:25.338267] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.171 [2024-12-06 13:56:25.338298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.171 [2024-12-06 13:56:25.356961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.171 [2024-12-06 13:56:25.357043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.171 [2024-12-06 13:56:25.357058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.171 [2024-12-06 13:56:25.375241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.171 [2024-12-06 13:56:25.375305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.172 [2024-12-06 13:56:25.375319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.172 [2024-12-06 13:56:25.393454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.172 [2024-12-06 13:56:25.393533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.172 [2024-12-06 13:56:25.393563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.172 [2024-12-06 13:56:25.411644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.172 [2024-12-06 13:56:25.411726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.172 [2024-12-06 13:56:25.411741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.172 [2024-12-06 13:56:25.429715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.172 [2024-12-06 13:56:25.430072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.172 [2024-12-06 13:56:25.430094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.172 [2024-12-06 13:56:25.449071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.172 [2024-12-06 13:56:25.449185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.172 [2024-12-06 13:56:25.449202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.172 [2024-12-06 13:56:25.468448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.172 
[2024-12-06 13:56:25.468529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.172 [2024-12-06 13:56:25.468558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.172 [2024-12-06 13:56:25.487151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.172 [2024-12-06 13:56:25.487223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.172 [2024-12-06 13:56:25.487238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.172 [2024-12-06 13:56:25.505967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.172 [2024-12-06 13:56:25.506314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.172 [2024-12-06 13:56:25.506334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.172 [2024-12-06 13:56:25.525255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.172 [2024-12-06 13:56:25.525559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.172 [2024-12-06 13:56:25.525578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.172 [2024-12-06 13:56:25.544468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.172 [2024-12-06 13:56:25.544551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.172 [2024-12-06 13:56:25.544565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.172 [2024-12-06 13:56:25.559206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.172 [2024-12-06 13:56:25.559270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.172 [2024-12-06 13:56:25.559285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.430 [2024-12-06 13:56:25.573601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.430 [2024-12-06 13:56:25.573912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:13830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.430 [2024-12-06 13:56:25.573932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.430 [2024-12-06 13:56:25.589216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x6a7b30) 00:17:26.430 [2024-12-06 13:56:25.589271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.430 [2024-12-06 13:56:25.589286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.430 [2024-12-06 13:56:25.604861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.430 [2024-12-06 13:56:25.604909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.430 [2024-12-06 13:56:25.604941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.430 [2024-12-06 13:56:25.620288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.430 [2024-12-06 13:56:25.620343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.430 [2024-12-06 13:56:25.620375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.430 [2024-12-06 13:56:25.635095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.430 [2024-12-06 13:56:25.635235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:7922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.430 [2024-12-06 13:56:25.635251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.430 [2024-12-06 13:56:25.649697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.430 [2024-12-06 13:56:25.649788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.430 [2024-12-06 13:56:25.649802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.430 [2024-12-06 13:56:25.664088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.430 [2024-12-06 13:56:25.664177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.430 [2024-12-06 13:56:25.664207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.430 [2024-12-06 13:56:25.677977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.431 [2024-12-06 13:56:25.678032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.431 [2024-12-06 13:56:25.678062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.431 [2024-12-06 13:56:25.691996] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.431 [2024-12-06 13:56:25.692052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.431 [2024-12-06 13:56:25.692082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.431 [2024-12-06 13:56:25.705880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.431 [2024-12-06 13:56:25.705950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.431 [2024-12-06 13:56:25.705963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.431 [2024-12-06 13:56:25.719818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.431 [2024-12-06 13:56:25.720158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.431 [2024-12-06 13:56:25.720179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.431 [2024-12-06 13:56:25.733986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.431 [2024-12-06 13:56:25.734040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.431 [2024-12-06 13:56:25.734069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.431 [2024-12-06 13:56:25.748064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.431 [2024-12-06 13:56:25.748144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.431 [2024-12-06 13:56:25.748158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.431 [2024-12-06 13:56:25.761964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.431 [2024-12-06 13:56:25.762340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.431 [2024-12-06 13:56:25.762362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.431 [2024-12-06 13:56:25.777473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.431 [2024-12-06 13:56:25.777546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.431 [2024-12-06 13:56:25.777577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:17:26.431 [2024-12-06 13:56:25.793291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.431 [2024-12-06 13:56:25.793347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:13667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.431 [2024-12-06 13:56:25.793377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.431 [2024-12-06 13:56:25.807880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.431 [2024-12-06 13:56:25.808219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.431 [2024-12-06 13:56:25.808238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.431 [2024-12-06 13:56:25.822274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.431 [2024-12-06 13:56:25.822328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.431 [2024-12-06 13:56:25.822358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.690 [2024-12-06 13:56:25.836664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.690 [2024-12-06 13:56:25.836735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.690 [2024-12-06 13:56:25.836750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.690 [2024-12-06 13:56:25.850653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.690 [2024-12-06 13:56:25.850723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.690 [2024-12-06 13:56:25.850736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.690 [2024-12-06 13:56:25.866200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.690 [2024-12-06 13:56:25.866297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.690 [2024-12-06 13:56:25.866314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.690 [2024-12-06 13:56:25.882060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.690 [2024-12-06 13:56:25.882139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.690 [2024-12-06 13:56:25.882152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.690 [2024-12-06 13:56:25.896042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.691 [2024-12-06 13:56:25.896109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.691 [2024-12-06 13:56:25.896133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.691 [2024-12-06 13:56:25.909956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.691 [2024-12-06 13:56:25.910022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.691 [2024-12-06 13:56:25.910035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.691 [2024-12-06 13:56:25.923860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.691 [2024-12-06 13:56:25.923928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.691 [2024-12-06 13:56:25.923941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.691 [2024-12-06 13:56:25.937533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.691 [2024-12-06 13:56:25.937600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.691 [2024-12-06 13:56:25.937612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.691 [2024-12-06 13:56:25.951269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.691 [2024-12-06 13:56:25.951322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.691 [2024-12-06 13:56:25.951344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.691 [2024-12-06 13:56:25.965049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.691 [2024-12-06 13:56:25.965122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.691 [2024-12-06 13:56:25.965138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.691 [2024-12-06 13:56:25.978740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.691 [2024-12-06 13:56:25.978806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:11353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.691 [2024-12-06 13:56:25.978818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.691 15563.00 IOPS, 60.79 MiB/s [2024-12-06T13:56:26.095Z] [2024-12-06 13:56:25.992546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.691 [2024-12-06 13:56:25.992608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.691 [2024-12-06 13:56:25.992622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.691 [2024-12-06 13:56:26.006198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.691 [2024-12-06 13:56:26.006264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.691 [2024-12-06 13:56:26.006276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.691 [2024-12-06 13:56:26.020548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.691 [2024-12-06 13:56:26.020611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:25384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.691 [2024-12-06 13:56:26.020623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.691 [2024-12-06 13:56:26.034442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.691 [2024-12-06 13:56:26.034510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.691 [2024-12-06 13:56:26.034522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.691 [2024-12-06 13:56:26.054537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.691 [2024-12-06 13:56:26.054612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.691 [2024-12-06 13:56:26.054625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.691 [2024-12-06 13:56:26.068676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.691 [2024-12-06 13:56:26.068745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:8759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.691 [2024-12-06 13:56:26.068757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.691 [2024-12-06 13:56:26.082824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.691 [2024-12-06 13:56:26.082893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:26.691 [2024-12-06 13:56:26.082906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.951 [2024-12-06 13:56:26.097166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.951 [2024-12-06 13:56:26.097214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.951 [2024-12-06 13:56:26.097225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.951 [2024-12-06 13:56:26.111201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.951 [2024-12-06 13:56:26.111269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:10556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.951 [2024-12-06 13:56:26.111283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.951 [2024-12-06 13:56:26.125326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.951 [2024-12-06 13:56:26.125382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.951 [2024-12-06 13:56:26.125394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.951 [2024-12-06 13:56:26.140159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.951 [2024-12-06 13:56:26.140218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:18118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.951 [2024-12-06 13:56:26.140230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.951 [2024-12-06 13:56:26.155252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.951 [2024-12-06 13:56:26.155297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:25326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.951 [2024-12-06 13:56:26.155309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.951 [2024-12-06 13:56:26.170944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.951 [2024-12-06 13:56:26.170990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:25309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.951 [2024-12-06 13:56:26.171006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.951 [2024-12-06 13:56:26.186348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.951 [2024-12-06 13:56:26.186379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19189 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.951 [2024-12-06 13:56:26.186389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.951 [2024-12-06 13:56:26.202049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.951 [2024-12-06 13:56:26.202097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:13227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.951 [2024-12-06 13:56:26.202108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.951 [2024-12-06 13:56:26.217714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.951 [2024-12-06 13:56:26.217763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.951 [2024-12-06 13:56:26.217775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.951 [2024-12-06 13:56:26.233466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.951 [2024-12-06 13:56:26.233515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:7045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.951 [2024-12-06 13:56:26.233526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.951 [2024-12-06 13:56:26.247671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.951 [2024-12-06 13:56:26.247720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.951 [2024-12-06 13:56:26.247731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.951 [2024-12-06 13:56:26.261477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.951 [2024-12-06 13:56:26.261525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:17369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.951 [2024-12-06 13:56:26.261536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.951 [2024-12-06 13:56:26.275503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.951 [2024-12-06 13:56:26.275536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.951 [2024-12-06 13:56:26.275548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.951 [2024-12-06 13:56:26.289393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.951 [2024-12-06 13:56:26.289441] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.951 [2024-12-06 13:56:26.289452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.951 [2024-12-06 13:56:26.303298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.951 [2024-12-06 13:56:26.303328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.951 [2024-12-06 13:56:26.303367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.951 [2024-12-06 13:56:26.316959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.951 [2024-12-06 13:56:26.316989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.951 [2024-12-06 13:56:26.317000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.951 [2024-12-06 13:56:26.330883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.951 [2024-12-06 13:56:26.330915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.951 [2024-12-06 13:56:26.330926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:26.951 [2024-12-06 13:56:26.345200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:26.951 [2024-12-06 13:56:26.345247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.951 [2024-12-06 13:56:26.345259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.210 [2024-12-06 13:56:26.360145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:27.210 [2024-12-06 13:56:26.360185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.210 [2024-12-06 13:56:26.360197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.210 [2024-12-06 13:56:26.374436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:27.210 [2024-12-06 13:56:26.374466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.210 [2024-12-06 13:56:26.374477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.210 [2024-12-06 13:56:26.388561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:27.210 [2024-12-06 13:56:26.388591] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.210 [2024-12-06 13:56:26.388602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.210 [2024-12-06 13:56:26.402811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:27.210 [2024-12-06 13:56:26.402840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.210 [2024-12-06 13:56:26.402851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.210 [2024-12-06 13:56:26.417676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:27.210 [2024-12-06 13:56:26.417707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.210 [2024-12-06 13:56:26.417717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.210 [2024-12-06 13:56:26.432769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:27.210 [2024-12-06 13:56:26.432816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.210 [2024-12-06 13:56:26.432828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.210 [2024-12-06 13:56:26.447831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:27.210 [2024-12-06 13:56:26.447879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.210 [2024-12-06 13:56:26.447890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.210 [2024-12-06 13:56:26.463076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:27.210 [2024-12-06 13:56:26.463116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.210 [2024-12-06 13:56:26.463128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.211 [2024-12-06 13:56:26.477830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:27.211 [2024-12-06 13:56:26.477876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:17456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.211 [2024-12-06 13:56:26.477887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.211 [2024-12-06 13:56:26.492242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 
00:17:27.211 [2024-12-06 13:56:26.492289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.211 [2024-12-06 13:56:26.492299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.211 [2024-12-06 13:56:26.506842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:27.211 [2024-12-06 13:56:26.506889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.211 [2024-12-06 13:56:26.506900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.211 [2024-12-06 13:56:26.521428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:27.211 [2024-12-06 13:56:26.521459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.211 [2024-12-06 13:56:26.521469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.211 [2024-12-06 13:56:26.535803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:27.211 [2024-12-06 13:56:26.535833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.211 [2024-12-06 13:56:26.535846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.211 [2024-12-06 13:56:26.550198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:27.211 [2024-12-06 13:56:26.550246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.211 [2024-12-06 13:56:26.550258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.211 [2024-12-06 13:56:26.564685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:27.211 [2024-12-06 13:56:26.564714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.211 [2024-12-06 13:56:26.564724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.211 [2024-12-06 13:56:26.578954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:27.211 [2024-12-06 13:56:26.578985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.211 [2024-12-06 13:56:26.578996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.211 [2024-12-06 13:56:26.593470] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:27.211 [2024-12-06 13:56:26.593532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:14771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.211 [2024-12-06 13:56:26.593544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.211 [2024-12-06 13:56:26.608150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:27.211 [2024-12-06 13:56:26.608188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:8634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.211 [2024-12-06 13:56:26.608199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.471 [2024-12-06 13:56:26.623876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:27.471 [2024-12-06 13:56:26.623924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.471 [2024-12-06 13:56:26.623936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.471 [2024-12-06 13:56:26.639911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:27.471 [2024-12-06 13:56:26.639941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.471 [2024-12-06 13:56:26.639950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.471 [2024-12-06 13:56:26.655695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:27.471 [2024-12-06 13:56:26.655742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.471 [2024-12-06 13:56:26.655754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.471 [2024-12-06 13:56:26.671489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:27.471 [2024-12-06 13:56:26.671523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.471 [2024-12-06 13:56:26.671535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.471 [2024-12-06 13:56:26.687243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:27.471 [2024-12-06 13:56:26.687288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.471 [2024-12-06 13:56:26.687315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:17:27.471 [2024-12-06 13:56:26.702256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:27.471 [2024-12-06 13:56:26.702288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.471 [2024-12-06 13:56:26.702299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.471 [2024-12-06 13:56:26.717315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:27.471 [2024-12-06 13:56:26.717361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.471 [2024-12-06 13:56:26.717373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.471 [2024-12-06 13:56:26.732332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:27.471 [2024-12-06 13:56:26.732378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.471 [2024-12-06 13:56:26.732389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.471 [2024-12-06 13:56:26.747488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:27.471 [2024-12-06 13:56:26.747521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.471 [2024-12-06 13:56:26.747532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.471 [2024-12-06 13:56:26.762336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:27.471 [2024-12-06 13:56:26.762382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.471 [2024-12-06 13:56:26.762425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.471 [2024-12-06 13:56:26.777289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:27.471 [2024-12-06 13:56:26.777334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.471 [2024-12-06 13:56:26.777345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.471 [2024-12-06 13:56:26.792704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:27.471 [2024-12-06 13:56:26.792750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.471 [2024-12-06 13:56:26.792762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.471 [2024-12-06 13:56:26.809308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:27.471 [2024-12-06 13:56:26.809354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.471 [2024-12-06 13:56:26.809364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.471 [2024-12-06 13:56:26.825314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:27.471 [2024-12-06 13:56:26.825361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.471 [2024-12-06 13:56:26.825372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.471 [2024-12-06 13:56:26.840082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:27.471 [2024-12-06 13:56:26.840137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.471 [2024-12-06 13:56:26.840148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.471 [2024-12-06 13:56:26.855078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:27.471 [2024-12-06 13:56:26.855117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.471 [2024-12-06 13:56:26.855127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.471 [2024-12-06 13:56:26.869614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:27.471 [2024-12-06 13:56:26.869645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.471 [2024-12-06 13:56:26.869655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.731 [2024-12-06 13:56:26.884572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:27.731 [2024-12-06 13:56:26.884620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.731 [2024-12-06 13:56:26.884631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.731 [2024-12-06 13:56:26.899224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:27.731 [2024-12-06 13:56:26.899254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:24791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.731 [2024-12-06 13:56:26.899264] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.731 [2024-12-06 13:56:26.913638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:27.731 [2024-12-06 13:56:26.913685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.731 [2024-12-06 13:56:26.913697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.731 [2024-12-06 13:56:26.928313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:27.731 [2024-12-06 13:56:26.928344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.731 [2024-12-06 13:56:26.928354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.731 [2024-12-06 13:56:26.943243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:27.731 [2024-12-06 13:56:26.943273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.731 [2024-12-06 13:56:26.943283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.731 [2024-12-06 13:56:26.957831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:27.731 [2024-12-06 13:56:26.957861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.731 [2024-12-06 13:56:26.957872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.731 [2024-12-06 13:56:26.972271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:27.731 [2024-12-06 13:56:26.972301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.731 [2024-12-06 13:56:26.972312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.731 16384.00 IOPS, 64.00 MiB/s [2024-12-06T13:56:27.135Z] [2024-12-06 13:56:26.988046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6a7b30) 00:17:27.731 [2024-12-06 13:56:26.988076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.731 [2024-12-06 13:56:26.988087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:27.731 00:17:27.731 Latency(us) 00:17:27.731 [2024-12-06T13:56:27.135Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:27.731 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:27.731 nvme0n1 : 2.01 16378.91 63.98 0.00 0.00 7808.26 3842.79 31933.91 
00:17:27.731 [2024-12-06T13:56:27.135Z] =================================================================================================================== 00:17:27.731 [2024-12-06T13:56:27.135Z] Total : 16378.91 63.98 0.00 0.00 7808.26 3842.79 31933.91 00:17:27.731 { 00:17:27.731 "results": [ 00:17:27.731 { 00:17:27.731 "job": "nvme0n1", 00:17:27.731 "core_mask": "0x2", 00:17:27.731 "workload": "randread", 00:17:27.731 "status": "finished", 00:17:27.731 "queue_depth": 128, 00:17:27.731 "io_size": 4096, 00:17:27.731 "runtime": 2.008436, 00:17:27.731 "iops": 16378.913741836932, 00:17:27.731 "mibps": 63.980131804050515, 00:17:27.731 "io_failed": 0, 00:17:27.731 "io_timeout": 0, 00:17:27.731 "avg_latency_us": 7808.26170498762, 00:17:27.731 "min_latency_us": 3842.7927272727275, 00:17:27.731 "max_latency_us": 31933.905454545453 00:17:27.731 } 00:17:27.731 ], 00:17:27.731 "core_count": 1 00:17:27.731 } 00:17:27.731 13:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:27.731 13:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:27.731 13:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:27.731 | .driver_specific 00:17:27.731 | .nvme_error 00:17:27.731 | .status_code 00:17:27.731 | .command_transient_transport_error' 00:17:27.731 13:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:27.990 13:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 129 > 0 )) 00:17:27.991 13:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80187 00:17:27.991 13:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80187 ']' 00:17:27.991 13:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80187 00:17:27.991 13:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:17:27.991 13:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:27.991 13:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80187 00:17:27.991 13:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:27.991 13:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:27.991 killing process with pid 80187 00:17:27.991 13:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80187' 00:17:27.991 Received shutdown signal, test time was about 2.000000 seconds 00:17:27.991 00:17:27.991 Latency(us) 00:17:27.991 [2024-12-06T13:56:27.395Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:27.991 [2024-12-06T13:56:27.395Z] =================================================================================================================== 00:17:27.991 [2024-12-06T13:56:27.395Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:27.991 13:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80187 00:17:27.991 13:56:27 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80187 00:17:28.250 13:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:17:28.250 13:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:28.250 13:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:17:28.250 13:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:17:28.250 13:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:17:28.251 13:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80238 00:17:28.251 13:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80238 /var/tmp/bperf.sock 00:17:28.251 13:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:17:28.251 13:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80238 ']' 00:17:28.251 13:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:28.251 13:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:28.251 13:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:28.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:28.251 13:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:28.251 13:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:28.251 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:28.251 Zero copy mechanism will not be used. 00:17:28.251 [2024-12-06 13:56:27.571869] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:17:28.251 [2024-12-06 13:56:27.571975] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80238 ] 00:17:28.511 [2024-12-06 13:56:27.716794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.511 [2024-12-06 13:56:27.760387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:28.511 [2024-12-06 13:56:27.815825] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:29.449 13:56:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:29.449 13:56:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:17:29.449 13:56:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:29.449 13:56:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:29.449 13:56:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:29.449 13:56:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.449 13:56:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:29.449 13:56:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.449 13:56:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:29.449 13:56:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:29.708 nvme0n1 00:17:29.968 13:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:17:29.968 13:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.968 13:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:29.968 13:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.968 13:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:29.968 13:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:29.968 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:29.968 Zero copy mechanism will not be used. 00:17:29.968 Running I/O for 2 seconds... 
00:17:29.968 [2024-12-06 13:56:29.262433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:29.968 [2024-12-06 13:56:29.262494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.968 [2024-12-06 13:56:29.262508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:29.968 [2024-12-06 13:56:29.266803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:29.968 [2024-12-06 13:56:29.266838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.968 [2024-12-06 13:56:29.266851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:29.968 [2024-12-06 13:56:29.270973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:29.968 [2024-12-06 13:56:29.271008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.968 [2024-12-06 13:56:29.271021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:29.968 [2024-12-06 13:56:29.274949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:29.968 [2024-12-06 13:56:29.274990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.968 [2024-12-06 13:56:29.275001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.969 [2024-12-06 13:56:29.278940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:29.969 [2024-12-06 13:56:29.278972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.969 [2024-12-06 13:56:29.278983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:29.969 [2024-12-06 13:56:29.282987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:29.969 [2024-12-06 13:56:29.283018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.969 [2024-12-06 13:56:29.283029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:29.969 [2024-12-06 13:56:29.287006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:29.969 [2024-12-06 13:56:29.287037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.969 [2024-12-06 13:56:29.287048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:29.969 [2024-12-06 13:56:29.291058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:29.969 [2024-12-06 13:56:29.291090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.969 [2024-12-06 13:56:29.291112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.969 [2024-12-06 13:56:29.294953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:29.969 [2024-12-06 13:56:29.294985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.969 [2024-12-06 13:56:29.294996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:29.969 [2024-12-06 13:56:29.298977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:29.969 [2024-12-06 13:56:29.299008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.969 [2024-12-06 13:56:29.299019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:29.969 [2024-12-06 13:56:29.303071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:29.969 [2024-12-06 13:56:29.303114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.969 [2024-12-06 13:56:29.303126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:29.969 [2024-12-06 13:56:29.307139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:29.969 [2024-12-06 13:56:29.307179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.969 [2024-12-06 13:56:29.307190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.969 [2024-12-06 13:56:29.311166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:29.969 [2024-12-06 13:56:29.311207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.969 [2024-12-06 13:56:29.311219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:29.969 [2024-12-06 13:56:29.314977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:29.969 [2024-12-06 13:56:29.315008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.969 [2024-12-06 13:56:29.315019] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:29.969 [2024-12-06 13:56:29.319081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:29.969 [2024-12-06 13:56:29.319123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.969 [2024-12-06 13:56:29.319136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:29.969 [2024-12-06 13:56:29.323125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:29.969 [2024-12-06 13:56:29.323166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.969 [2024-12-06 13:56:29.323178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.969 [2024-12-06 13:56:29.327129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:29.969 [2024-12-06 13:56:29.327171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.969 [2024-12-06 13:56:29.327184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:29.969 [2024-12-06 13:56:29.331128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:29.969 [2024-12-06 13:56:29.331169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.969 [2024-12-06 13:56:29.331181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:29.969 [2024-12-06 13:56:29.335003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:29.969 [2024-12-06 13:56:29.335034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.969 [2024-12-06 13:56:29.335045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:29.969 [2024-12-06 13:56:29.339102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:29.969 [2024-12-06 13:56:29.339143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.969 [2024-12-06 13:56:29.339154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.969 [2024-12-06 13:56:29.343105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:29.969 [2024-12-06 13:56:29.343146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:29.969 [2024-12-06 13:56:29.343157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:29.969 [2024-12-06 13:56:29.347141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:29.969 [2024-12-06 13:56:29.347182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.969 [2024-12-06 13:56:29.347194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:29.969 [2024-12-06 13:56:29.351015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:29.969 [2024-12-06 13:56:29.351046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.969 [2024-12-06 13:56:29.351057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:29.969 [2024-12-06 13:56:29.355085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:29.969 [2024-12-06 13:56:29.355130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.969 [2024-12-06 13:56:29.355142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.969 [2024-12-06 13:56:29.359063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:29.969 [2024-12-06 13:56:29.359109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.969 [2024-12-06 13:56:29.359123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:29.969 [2024-12-06 13:56:29.362944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:29.969 [2024-12-06 13:56:29.362975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.969 [2024-12-06 13:56:29.362986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:29.969 [2024-12-06 13:56:29.367076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:29.969 [2024-12-06 13:56:29.367120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.969 [2024-12-06 13:56:29.367132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:30.230 [2024-12-06 13:56:29.371195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.230 [2024-12-06 13:56:29.371224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.230 [2024-12-06 13:56:29.371234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:30.230 [2024-12-06 13:56:29.375217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.230 [2024-12-06 13:56:29.375246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.230 [2024-12-06 13:56:29.375257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:30.230 [2024-12-06 13:56:29.379179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.230 [2024-12-06 13:56:29.379208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.230 [2024-12-06 13:56:29.379218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:30.230 [2024-12-06 13:56:29.383177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.230 [2024-12-06 13:56:29.383207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.230 [2024-12-06 13:56:29.383219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:30.230 [2024-12-06 13:56:29.387422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.230 [2024-12-06 13:56:29.387454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.230 [2024-12-06 13:56:29.387465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:30.230 [2024-12-06 13:56:29.391372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.230 [2024-12-06 13:56:29.391404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.230 [2024-12-06 13:56:29.391416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:30.230 [2024-12-06 13:56:29.395497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.231 [2024-12-06 13:56:29.395560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.231 [2024-12-06 13:56:29.395572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:30.231 [2024-12-06 13:56:29.399305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.231 [2024-12-06 13:56:29.399424] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.231 [2024-12-06 13:56:29.399437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:30.231 [2024-12-06 13:56:29.403287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.231 [2024-12-06 13:56:29.403316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.231 [2024-12-06 13:56:29.403327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:30.231 [2024-12-06 13:56:29.407328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.231 [2024-12-06 13:56:29.407385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.231 [2024-12-06 13:56:29.407396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:30.231 [2024-12-06 13:56:29.411245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.231 [2024-12-06 13:56:29.411274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.231 [2024-12-06 13:56:29.411284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:30.231 [2024-12-06 13:56:29.415277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.231 [2024-12-06 13:56:29.415306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.231 [2024-12-06 13:56:29.415317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:30.231 [2024-12-06 13:56:29.419204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.231 [2024-12-06 13:56:29.419233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.231 [2024-12-06 13:56:29.419247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:30.231 [2024-12-06 13:56:29.423098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.231 [2024-12-06 13:56:29.423142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.231 [2024-12-06 13:56:29.423169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:30.231 [2024-12-06 13:56:29.427161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 
00:17:30.231 [2024-12-06 13:56:29.427206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.231 [2024-12-06 13:56:29.427218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:30.231 [2024-12-06 13:56:29.431246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.231 [2024-12-06 13:56:29.431286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.231 [2024-12-06 13:56:29.431298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:30.231 [2024-12-06 13:56:29.435196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.231 [2024-12-06 13:56:29.435238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.231 [2024-12-06 13:56:29.435249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:30.231 [2024-12-06 13:56:29.439073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.231 [2024-12-06 13:56:29.439114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.231 [2024-12-06 13:56:29.439127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:30.231 [2024-12-06 13:56:29.443051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.231 [2024-12-06 13:56:29.443087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.231 [2024-12-06 13:56:29.443111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:30.231 [2024-12-06 13:56:29.447038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.231 [2024-12-06 13:56:29.447068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.231 [2024-12-06 13:56:29.447096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:30.231 [2024-12-06 13:56:29.451054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.231 [2024-12-06 13:56:29.451085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.231 [2024-12-06 13:56:29.451096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:30.231 [2024-12-06 13:56:29.455126] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660)
00:17:30.231 [2024-12-06 13:56:29.455166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:30.231 [2024-12-06 13:56:29.455178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:17:30.231 [2024-12-06 13:56:29.458986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660)
00:17:30.231 [2024-12-06 13:56:29.459017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:30.231 [2024-12-06 13:56:29.459027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
[... the same three-message sequence (nvme_tcp.c:1365 data digest error on tqpair=(0x1c10660), nvme_qpair.c:243 READ sqid:1 cid:14/15 command print, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for every outstanding READ from 13:56:29.462 through 13:56:30.045; only the lba, cid, and sqhd values vary ...]
00:17:30.760 [2024-12-06 13:56:30.049021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660)
00:17:30.760 [2024-12-06 13:56:30.049052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1
cid:14 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.760 [2024-12-06 13:56:30.049062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:30.760 [2024-12-06 13:56:30.053149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.760 [2024-12-06 13:56:30.053190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.760 [2024-12-06 13:56:30.053201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:30.760 [2024-12-06 13:56:30.057157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.760 [2024-12-06 13:56:30.057200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.760 [2024-12-06 13:56:30.057212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:30.760 [2024-12-06 13:56:30.061272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.760 [2024-12-06 13:56:30.061303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.760 [2024-12-06 13:56:30.061314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:30.760 [2024-12-06 13:56:30.065176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.760 [2024-12-06 13:56:30.065208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.760 [2024-12-06 13:56:30.065219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:30.760 [2024-12-06 13:56:30.069105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.760 [2024-12-06 13:56:30.069163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.760 [2024-12-06 13:56:30.069176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:30.760 [2024-12-06 13:56:30.073171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.760 [2024-12-06 13:56:30.073202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.760 [2024-12-06 13:56:30.073213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:30.760 [2024-12-06 13:56:30.077282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.760 [2024-12-06 13:56:30.077315] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.760 [2024-12-06 13:56:30.077326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:30.760 [2024-12-06 13:56:30.081256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.760 [2024-12-06 13:56:30.081288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.760 [2024-12-06 13:56:30.081299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:30.760 [2024-12-06 13:56:30.085183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.760 [2024-12-06 13:56:30.085214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.761 [2024-12-06 13:56:30.085224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:30.761 [2024-12-06 13:56:30.089005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.761 [2024-12-06 13:56:30.089037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.761 [2024-12-06 13:56:30.089048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:30.761 [2024-12-06 13:56:30.092925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.761 [2024-12-06 13:56:30.092956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.761 [2024-12-06 13:56:30.092966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:30.761 [2024-12-06 13:56:30.096907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.761 [2024-12-06 13:56:30.096938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.761 [2024-12-06 13:56:30.096948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:30.761 [2024-12-06 13:56:30.100975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.761 [2024-12-06 13:56:30.101006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.761 [2024-12-06 13:56:30.101017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:30.761 [2024-12-06 13:56:30.104913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 
00:17:30.761 [2024-12-06 13:56:30.104946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.761 [2024-12-06 13:56:30.104957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:30.761 [2024-12-06 13:56:30.108761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.761 [2024-12-06 13:56:30.108792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.761 [2024-12-06 13:56:30.108802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:30.761 [2024-12-06 13:56:30.112756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.761 [2024-12-06 13:56:30.112786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.761 [2024-12-06 13:56:30.112798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:30.761 [2024-12-06 13:56:30.116773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.761 [2024-12-06 13:56:30.116804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.761 [2024-12-06 13:56:30.116815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:30.761 [2024-12-06 13:56:30.120811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.761 [2024-12-06 13:56:30.120843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.761 [2024-12-06 13:56:30.120853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:30.761 [2024-12-06 13:56:30.124742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.761 [2024-12-06 13:56:30.124773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.761 [2024-12-06 13:56:30.124784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:30.761 [2024-12-06 13:56:30.128454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.761 [2024-12-06 13:56:30.128485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.761 [2024-12-06 13:56:30.128495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:30.761 [2024-12-06 13:56:30.132241] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.761 [2024-12-06 13:56:30.132271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.761 [2024-12-06 13:56:30.132281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:30.761 [2024-12-06 13:56:30.136182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.761 [2024-12-06 13:56:30.136212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.761 [2024-12-06 13:56:30.136223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:30.761 [2024-12-06 13:56:30.140238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.761 [2024-12-06 13:56:30.140268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.761 [2024-12-06 13:56:30.140278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:30.761 [2024-12-06 13:56:30.144203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.761 [2024-12-06 13:56:30.144232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.761 [2024-12-06 13:56:30.144242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:30.761 [2024-12-06 13:56:30.148089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.761 [2024-12-06 13:56:30.148128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.761 [2024-12-06 13:56:30.148139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:30.761 [2024-12-06 13:56:30.151917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.761 [2024-12-06 13:56:30.151948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.761 [2024-12-06 13:56:30.151958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:30.761 [2024-12-06 13:56:30.155940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:30.761 [2024-12-06 13:56:30.155983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.761 [2024-12-06 13:56:30.155995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:17:31.023 [2024-12-06 13:56:30.159988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.023 [2024-12-06 13:56:30.160019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.023 [2024-12-06 13:56:30.160030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.023 [2024-12-06 13:56:30.164050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.023 [2024-12-06 13:56:30.164082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.023 [2024-12-06 13:56:30.164092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.023 [2024-12-06 13:56:30.167985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.023 [2024-12-06 13:56:30.168016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.023 [2024-12-06 13:56:30.168027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.023 [2024-12-06 13:56:30.171907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.023 [2024-12-06 13:56:30.171938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.023 [2024-12-06 13:56:30.171948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.023 [2024-12-06 13:56:30.175930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.023 [2024-12-06 13:56:30.175963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.023 [2024-12-06 13:56:30.175990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.023 [2024-12-06 13:56:30.180026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.023 [2024-12-06 13:56:30.180058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.023 [2024-12-06 13:56:30.180069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.023 [2024-12-06 13:56:30.184045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.023 [2024-12-06 13:56:30.184079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.023 [2024-12-06 13:56:30.184090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.023 [2024-12-06 13:56:30.187954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.023 [2024-12-06 13:56:30.188002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.023 [2024-12-06 13:56:30.188022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.023 [2024-12-06 13:56:30.191887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.023 [2024-12-06 13:56:30.191918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.023 [2024-12-06 13:56:30.191928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.023 [2024-12-06 13:56:30.195913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.023 [2024-12-06 13:56:30.195944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.023 [2024-12-06 13:56:30.195955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.023 [2024-12-06 13:56:30.199981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.023 [2024-12-06 13:56:30.200012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.023 [2024-12-06 13:56:30.200023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.023 [2024-12-06 13:56:30.204014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.023 [2024-12-06 13:56:30.204045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.023 [2024-12-06 13:56:30.204056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.023 [2024-12-06 13:56:30.207990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.023 [2024-12-06 13:56:30.208021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.023 [2024-12-06 13:56:30.208032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.023 [2024-12-06 13:56:30.211835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.023 [2024-12-06 13:56:30.211867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.023 [2024-12-06 13:56:30.211878] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.023 [2024-12-06 13:56:30.215929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.023 [2024-12-06 13:56:30.215960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.023 [2024-12-06 13:56:30.215970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.023 [2024-12-06 13:56:30.219926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.023 [2024-12-06 13:56:30.219957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.023 [2024-12-06 13:56:30.219968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.023 [2024-12-06 13:56:30.224062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.023 [2024-12-06 13:56:30.224093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.023 [2024-12-06 13:56:30.224115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.023 [2024-12-06 13:56:30.227919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.023 [2024-12-06 13:56:30.227952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.023 [2024-12-06 13:56:30.227963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.023 [2024-12-06 13:56:30.231821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.023 [2024-12-06 13:56:30.231852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.023 [2024-12-06 13:56:30.231862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.023 [2024-12-06 13:56:30.235835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.023 [2024-12-06 13:56:30.235865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.023 [2024-12-06 13:56:30.235876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.023 [2024-12-06 13:56:30.239547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.023 [2024-12-06 13:56:30.239579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:31.023 [2024-12-06 13:56:30.239590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.023 [2024-12-06 13:56:30.243751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.023 [2024-12-06 13:56:30.243785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.023 [2024-12-06 13:56:30.243827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.023 [2024-12-06 13:56:30.248276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.023 [2024-12-06 13:56:30.248307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.024 [2024-12-06 13:56:30.248318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.024 [2024-12-06 13:56:30.252481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.024 [2024-12-06 13:56:30.252513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.024 [2024-12-06 13:56:30.252523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.024 [2024-12-06 13:56:30.256584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.024 [2024-12-06 13:56:30.256617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.024 [2024-12-06 13:56:30.256628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.024 7595.00 IOPS, 949.38 MiB/s [2024-12-06T13:56:30.428Z] [2024-12-06 13:56:30.262159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.024 [2024-12-06 13:56:30.262200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.024 [2024-12-06 13:56:30.262212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.024 [2024-12-06 13:56:30.266924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.024 [2024-12-06 13:56:30.266970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.024 [2024-12-06 13:56:30.266983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.024 [2024-12-06 13:56:30.271632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.024 [2024-12-06 13:56:30.271699] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.024 [2024-12-06 13:56:30.271726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.024 [2024-12-06 13:56:30.276128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.024 [2024-12-06 13:56:30.276168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.024 [2024-12-06 13:56:30.276180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.024 [2024-12-06 13:56:30.280415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.024 [2024-12-06 13:56:30.280446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.024 [2024-12-06 13:56:30.280457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.024 [2024-12-06 13:56:30.284917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.024 [2024-12-06 13:56:30.284968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.024 [2024-12-06 13:56:30.284981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.024 [2024-12-06 13:56:30.289570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.024 [2024-12-06 13:56:30.289602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.024 [2024-12-06 13:56:30.289613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.024 [2024-12-06 13:56:30.293890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.024 [2024-12-06 13:56:30.293923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.024 [2024-12-06 13:56:30.293934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.024 [2024-12-06 13:56:30.298016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.024 [2024-12-06 13:56:30.298048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.024 [2024-12-06 13:56:30.298059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.024 [2024-12-06 13:56:30.302143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 
00:17:31.024 [2024-12-06 13:56:30.302186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.024 [2024-12-06 13:56:30.302198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.024 [2024-12-06 13:56:30.306205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.024 [2024-12-06 13:56:30.306235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.024 [2024-12-06 13:56:30.306246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.024 [2024-12-06 13:56:30.310491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.024 [2024-12-06 13:56:30.310525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.024 [2024-12-06 13:56:30.310536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.024 [2024-12-06 13:56:30.314793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.024 [2024-12-06 13:56:30.314825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.024 [2024-12-06 13:56:30.314836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.024 [2024-12-06 13:56:30.318913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.024 [2024-12-06 13:56:30.318946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.024 [2024-12-06 13:56:30.318957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.024 [2024-12-06 13:56:30.323124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.024 [2024-12-06 13:56:30.323165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.024 [2024-12-06 13:56:30.323177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.024 [2024-12-06 13:56:30.327514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.024 [2024-12-06 13:56:30.327557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.024 [2024-12-06 13:56:30.327570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.024 [2024-12-06 13:56:30.331875] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.024 [2024-12-06 13:56:30.331907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.024 [2024-12-06 13:56:30.331919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.024 [2024-12-06 13:56:30.336053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.024 [2024-12-06 13:56:30.336085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.024 [2024-12-06 13:56:30.336107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.024 [2024-12-06 13:56:30.340109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.024 [2024-12-06 13:56:30.340152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.024 [2024-12-06 13:56:30.340165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.024 [2024-12-06 13:56:30.344536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.024 [2024-12-06 13:56:30.344568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.024 [2024-12-06 13:56:30.344579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.024 [2024-12-06 13:56:30.348501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.024 [2024-12-06 13:56:30.348533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.024 [2024-12-06 13:56:30.348544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.024 [2024-12-06 13:56:30.352495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.024 [2024-12-06 13:56:30.352527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.024 [2024-12-06 13:56:30.352539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.024 [2024-12-06 13:56:30.356562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.024 [2024-12-06 13:56:30.356595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.024 [2024-12-06 13:56:30.356606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:17:31.024 [2024-12-06 13:56:30.360587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.024 [2024-12-06 13:56:30.360620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.024 [2024-12-06 13:56:30.360631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.024 [2024-12-06 13:56:30.364727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.025 [2024-12-06 13:56:30.364759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.025 [2024-12-06 13:56:30.364770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.025 [2024-12-06 13:56:30.369020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.025 [2024-12-06 13:56:30.369052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.025 [2024-12-06 13:56:30.369063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.025 [2024-12-06 13:56:30.373345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.025 [2024-12-06 13:56:30.373376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.025 [2024-12-06 13:56:30.373387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.025 [2024-12-06 13:56:30.377592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.025 [2024-12-06 13:56:30.377624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.025 [2024-12-06 13:56:30.377635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.025 [2024-12-06 13:56:30.381587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.025 [2024-12-06 13:56:30.381619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.025 [2024-12-06 13:56:30.381630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.025 [2024-12-06 13:56:30.385778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.025 [2024-12-06 13:56:30.385809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.025 [2024-12-06 13:56:30.385820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.025 [2024-12-06 13:56:30.390042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.025 [2024-12-06 13:56:30.390074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.025 [2024-12-06 13:56:30.390086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.025 [2024-12-06 13:56:30.394333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.025 [2024-12-06 13:56:30.394397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.025 [2024-12-06 13:56:30.394409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.025 [2024-12-06 13:56:30.398405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.025 [2024-12-06 13:56:30.398465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.025 [2024-12-06 13:56:30.398477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.025 [2024-12-06 13:56:30.402375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.025 [2024-12-06 13:56:30.402407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.025 [2024-12-06 13:56:30.402418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.025 [2024-12-06 13:56:30.406461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.025 [2024-12-06 13:56:30.406494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.025 [2024-12-06 13:56:30.406505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.025 [2024-12-06 13:56:30.410929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.025 [2024-12-06 13:56:30.410965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.025 [2024-12-06 13:56:30.410979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.025 [2024-12-06 13:56:30.415397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.025 [2024-12-06 13:56:30.415429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.025 [2024-12-06 13:56:30.415440] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.025 [2024-12-06 13:56:30.419496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.025 [2024-12-06 13:56:30.419540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.025 [2024-12-06 13:56:30.419551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.287 [2024-12-06 13:56:30.423793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.287 [2024-12-06 13:56:30.423825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.287 [2024-12-06 13:56:30.423837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.287 [2024-12-06 13:56:30.428141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.287 [2024-12-06 13:56:30.428190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.287 [2024-12-06 13:56:30.428201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.287 [2024-12-06 13:56:30.432467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.287 [2024-12-06 13:56:30.432498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.287 [2024-12-06 13:56:30.432509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.287 [2024-12-06 13:56:30.436853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.287 [2024-12-06 13:56:30.436888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.287 [2024-12-06 13:56:30.436901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.287 [2024-12-06 13:56:30.441171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.287 [2024-12-06 13:56:30.441212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.287 [2024-12-06 13:56:30.441224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.287 [2024-12-06 13:56:30.445388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.287 [2024-12-06 13:56:30.445420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:31.287 [2024-12-06 13:56:30.445432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.287 [2024-12-06 13:56:30.449541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.287 [2024-12-06 13:56:30.449571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.287 [2024-12-06 13:56:30.449582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.287 [2024-12-06 13:56:30.453712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.287 [2024-12-06 13:56:30.453743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.287 [2024-12-06 13:56:30.453753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.287 [2024-12-06 13:56:30.457793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.287 [2024-12-06 13:56:30.457824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.287 [2024-12-06 13:56:30.457835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.287 [2024-12-06 13:56:30.461660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.287 [2024-12-06 13:56:30.461691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.287 [2024-12-06 13:56:30.461702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.287 [2024-12-06 13:56:30.465565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.287 [2024-12-06 13:56:30.465598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.287 [2024-12-06 13:56:30.465608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.287 [2024-12-06 13:56:30.469671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.287 [2024-12-06 13:56:30.469703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.287 [2024-12-06 13:56:30.469714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.287 [2024-12-06 13:56:30.473791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.287 [2024-12-06 13:56:30.473840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.287 [2024-12-06 13:56:30.473851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.287 [2024-12-06 13:56:30.477905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.287 [2024-12-06 13:56:30.477940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.287 [2024-12-06 13:56:30.477952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.287 [2024-12-06 13:56:30.481894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.287 [2024-12-06 13:56:30.481926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.287 [2024-12-06 13:56:30.481938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.287 [2024-12-06 13:56:30.485792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.287 [2024-12-06 13:56:30.485823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.287 [2024-12-06 13:56:30.485834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.287 [2024-12-06 13:56:30.489754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.287 [2024-12-06 13:56:30.489785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.287 [2024-12-06 13:56:30.489797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.287 [2024-12-06 13:56:30.493890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.287 [2024-12-06 13:56:30.493937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.287 [2024-12-06 13:56:30.493981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.287 [2024-12-06 13:56:30.498113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.287 [2024-12-06 13:56:30.498155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.287 [2024-12-06 13:56:30.498169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.287 [2024-12-06 13:56:30.502045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.287 [2024-12-06 13:56:30.502077] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.288 [2024-12-06 13:56:30.502088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.288 [2024-12-06 13:56:30.505878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.288 [2024-12-06 13:56:30.505919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.288 [2024-12-06 13:56:30.505930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.288 [2024-12-06 13:56:30.509813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.288 [2024-12-06 13:56:30.509843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.288 [2024-12-06 13:56:30.509853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.288 [2024-12-06 13:56:30.513897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.288 [2024-12-06 13:56:30.513938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.288 [2024-12-06 13:56:30.513949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.288 [2024-12-06 13:56:30.518047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.288 [2024-12-06 13:56:30.518077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.288 [2024-12-06 13:56:30.518088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.288 [2024-12-06 13:56:30.521979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.288 [2024-12-06 13:56:30.522010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.288 [2024-12-06 13:56:30.522021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.288 [2024-12-06 13:56:30.525916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.288 [2024-12-06 13:56:30.525948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.288 [2024-12-06 13:56:30.525959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.288 [2024-12-06 13:56:30.529835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 
00:17:31.288 [2024-12-06 13:56:30.529884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.288 [2024-12-06 13:56:30.529895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.288 [2024-12-06 13:56:30.534031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.288 [2024-12-06 13:56:30.534065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.288 [2024-12-06 13:56:30.534078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.288 [2024-12-06 13:56:30.538289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.288 [2024-12-06 13:56:30.538320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.288 [2024-12-06 13:56:30.538331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.288 [2024-12-06 13:56:30.542212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.288 [2024-12-06 13:56:30.542243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.288 [2024-12-06 13:56:30.542254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.288 [2024-12-06 13:56:30.546042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.288 [2024-12-06 13:56:30.546072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.288 [2024-12-06 13:56:30.546083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.288 [2024-12-06 13:56:30.549942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.288 [2024-12-06 13:56:30.549972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.288 [2024-12-06 13:56:30.549983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.288 [2024-12-06 13:56:30.554122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.288 [2024-12-06 13:56:30.554163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.288 [2024-12-06 13:56:30.554175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.288 [2024-12-06 13:56:30.558207] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.288 [2024-12-06 13:56:30.558236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.288 [2024-12-06 13:56:30.558247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.288 [2024-12-06 13:56:30.562183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.288 [2024-12-06 13:56:30.562212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.288 [2024-12-06 13:56:30.562222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.288 [2024-12-06 13:56:30.566008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.288 [2024-12-06 13:56:30.566038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.288 [2024-12-06 13:56:30.566049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.288 [2024-12-06 13:56:30.569874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.288 [2024-12-06 13:56:30.569904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.288 [2024-12-06 13:56:30.569915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.288 [2024-12-06 13:56:30.573693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.288 [2024-12-06 13:56:30.573723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.288 [2024-12-06 13:56:30.573734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.288 [2024-12-06 13:56:30.577767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.288 [2024-12-06 13:56:30.577797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.288 [2024-12-06 13:56:30.577808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.288 [2024-12-06 13:56:30.581872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.288 [2024-12-06 13:56:30.581903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.288 [2024-12-06 13:56:30.581913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 
p:0 m:0 dnr:0 00:17:31.288 [2024-12-06 13:56:30.586124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.288 [2024-12-06 13:56:30.586167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.288 [2024-12-06 13:56:30.586179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.288 [2024-12-06 13:56:30.590121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.288 [2024-12-06 13:56:30.590164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.288 [2024-12-06 13:56:30.590176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.288 [2024-12-06 13:56:30.594259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.288 [2024-12-06 13:56:30.594302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.288 [2024-12-06 13:56:30.594314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.288 [2024-12-06 13:56:30.598495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.288 [2024-12-06 13:56:30.598527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.288 [2024-12-06 13:56:30.598554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.288 [2024-12-06 13:56:30.602783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.288 [2024-12-06 13:56:30.602816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.288 [2024-12-06 13:56:30.602828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.288 [2024-12-06 13:56:30.606954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.288 [2024-12-06 13:56:30.606986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.288 [2024-12-06 13:56:30.606996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.288 [2024-12-06 13:56:30.610949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.288 [2024-12-06 13:56:30.610982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.288 [2024-12-06 13:56:30.610994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.289 [2024-12-06 13:56:30.615275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.289 [2024-12-06 13:56:30.615305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.289 [2024-12-06 13:56:30.615317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.289 [2024-12-06 13:56:30.619370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.289 [2024-12-06 13:56:30.619403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.289 [2024-12-06 13:56:30.619414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.289 [2024-12-06 13:56:30.623437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.289 [2024-12-06 13:56:30.623470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.289 [2024-12-06 13:56:30.623482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.289 [2024-12-06 13:56:30.627410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.289 [2024-12-06 13:56:30.627442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.289 [2024-12-06 13:56:30.627453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.289 [2024-12-06 13:56:30.631359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.289 [2024-12-06 13:56:30.631390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.289 [2024-12-06 13:56:30.631402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.289 [2024-12-06 13:56:30.635474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.289 [2024-12-06 13:56:30.635505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.289 [2024-12-06 13:56:30.635517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.289 [2024-12-06 13:56:30.639704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.289 [2024-12-06 13:56:30.639735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.289 [2024-12-06 13:56:30.639746] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.289 [2024-12-06 13:56:30.643769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.289 [2024-12-06 13:56:30.643800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.289 [2024-12-06 13:56:30.643811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.289 [2024-12-06 13:56:30.647709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.289 [2024-12-06 13:56:30.647740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.289 [2024-12-06 13:56:30.647750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.289 [2024-12-06 13:56:30.651777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.289 [2024-12-06 13:56:30.651808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.289 [2024-12-06 13:56:30.651819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.289 [2024-12-06 13:56:30.655850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.289 [2024-12-06 13:56:30.655881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.289 [2024-12-06 13:56:30.655891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.289 [2024-12-06 13:56:30.659955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.289 [2024-12-06 13:56:30.659986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.289 [2024-12-06 13:56:30.659997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.289 [2024-12-06 13:56:30.664051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.289 [2024-12-06 13:56:30.664082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.289 [2024-12-06 13:56:30.664092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.289 [2024-12-06 13:56:30.667964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.289 [2024-12-06 13:56:30.667994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:31.289 [2024-12-06 13:56:30.668006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.289 [2024-12-06 13:56:30.672018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.289 [2024-12-06 13:56:30.672049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.289 [2024-12-06 13:56:30.672060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.289 [2024-12-06 13:56:30.676201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.289 [2024-12-06 13:56:30.676231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.289 [2024-12-06 13:56:30.676241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.289 [2024-12-06 13:56:30.680079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.289 [2024-12-06 13:56:30.680119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.289 [2024-12-06 13:56:30.680130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.289 [2024-12-06 13:56:30.684028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.289 [2024-12-06 13:56:30.684060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.289 [2024-12-06 13:56:30.684072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.550 [2024-12-06 13:56:30.688010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.550 [2024-12-06 13:56:30.688051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.550 [2024-12-06 13:56:30.688062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.550 [2024-12-06 13:56:30.691963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.551 [2024-12-06 13:56:30.691994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.551 [2024-12-06 13:56:30.692004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.551 [2024-12-06 13:56:30.695976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.551 [2024-12-06 13:56:30.696007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.551 [2024-12-06 13:56:30.696018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.551 [2024-12-06 13:56:30.700166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.551 [2024-12-06 13:56:30.700205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.551 [2024-12-06 13:56:30.700217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.551 [2024-12-06 13:56:30.704300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.551 [2024-12-06 13:56:30.704333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.551 [2024-12-06 13:56:30.704345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.551 [2024-12-06 13:56:30.708678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.551 [2024-12-06 13:56:30.708710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.551 [2024-12-06 13:56:30.708721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.551 [2024-12-06 13:56:30.713036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.551 [2024-12-06 13:56:30.713067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.551 [2024-12-06 13:56:30.713079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.551 [2024-12-06 13:56:30.717575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.551 [2024-12-06 13:56:30.717609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.551 [2024-12-06 13:56:30.717620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.551 [2024-12-06 13:56:30.722478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.551 [2024-12-06 13:56:30.722525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.551 [2024-12-06 13:56:30.722536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.551 [2024-12-06 13:56:30.727222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.551 [2024-12-06 13:56:30.727271] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.551 [2024-12-06 13:56:30.727285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.551 [2024-12-06 13:56:30.731837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.551 [2024-12-06 13:56:30.731868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.551 [2024-12-06 13:56:30.731879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.551 [2024-12-06 13:56:30.736277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.551 [2024-12-06 13:56:30.736310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.551 [2024-12-06 13:56:30.736321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.551 [2024-12-06 13:56:30.740708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.551 [2024-12-06 13:56:30.740738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.551 [2024-12-06 13:56:30.740749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.551 [2024-12-06 13:56:30.745248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.551 [2024-12-06 13:56:30.745281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.551 [2024-12-06 13:56:30.745293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.551 [2024-12-06 13:56:30.749564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.551 [2024-12-06 13:56:30.749594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.551 [2024-12-06 13:56:30.749605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.551 [2024-12-06 13:56:30.753695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.551 [2024-12-06 13:56:30.753726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.551 [2024-12-06 13:56:30.753740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.551 [2024-12-06 13:56:30.757898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 
00:17:31.551 [2024-12-06 13:56:30.757928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.551 [2024-12-06 13:56:30.757938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.551 [2024-12-06 13:56:30.762014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.551 [2024-12-06 13:56:30.762045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.551 [2024-12-06 13:56:30.762056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.551 [2024-12-06 13:56:30.766155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.551 [2024-12-06 13:56:30.766196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.551 [2024-12-06 13:56:30.766207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.551 [2024-12-06 13:56:30.770089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.551 [2024-12-06 13:56:30.770146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.551 [2024-12-06 13:56:30.770158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.551 [2024-12-06 13:56:30.773871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.551 [2024-12-06 13:56:30.773919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.551 [2024-12-06 13:56:30.773930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.551 [2024-12-06 13:56:30.777800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.551 [2024-12-06 13:56:30.777832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.551 [2024-12-06 13:56:30.777844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.551 [2024-12-06 13:56:30.781777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.551 [2024-12-06 13:56:30.781808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.551 [2024-12-06 13:56:30.781818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.551 [2024-12-06 13:56:30.785848] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.551 [2024-12-06 13:56:30.785883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.551 [2024-12-06 13:56:30.785894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.551 [2024-12-06 13:56:30.789764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.551 [2024-12-06 13:56:30.789795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.551 [2024-12-06 13:56:30.789805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.551 [2024-12-06 13:56:30.793454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.551 [2024-12-06 13:56:30.793485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.551 [2024-12-06 13:56:30.793495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.551 [2024-12-06 13:56:30.797209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.551 [2024-12-06 13:56:30.797240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.551 [2024-12-06 13:56:30.797268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.551 [2024-12-06 13:56:30.801106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.551 [2024-12-06 13:56:30.801145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.552 [2024-12-06 13:56:30.801156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.552 [2024-12-06 13:56:30.805179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.552 [2024-12-06 13:56:30.805209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.552 [2024-12-06 13:56:30.805221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.552 [2024-12-06 13:56:30.809290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.552 [2024-12-06 13:56:30.809353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.552 [2024-12-06 13:56:30.809364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 
p:0 m:0 dnr:0 00:17:31.552 [2024-12-06 13:56:30.813184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.552 [2024-12-06 13:56:30.813215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.552 [2024-12-06 13:56:30.813225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.552 [2024-12-06 13:56:30.816991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.552 [2024-12-06 13:56:30.817021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.552 [2024-12-06 13:56:30.817032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.552 [2024-12-06 13:56:30.820944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.552 [2024-12-06 13:56:30.820975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.552 [2024-12-06 13:56:30.820986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.552 [2024-12-06 13:56:30.825100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.552 [2024-12-06 13:56:30.825139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.552 [2024-12-06 13:56:30.825150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.552 [2024-12-06 13:56:30.829121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.552 [2024-12-06 13:56:30.829159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.552 [2024-12-06 13:56:30.829171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.552 [2024-12-06 13:56:30.833047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.552 [2024-12-06 13:56:30.833078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.552 [2024-12-06 13:56:30.833089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.552 [2024-12-06 13:56:30.836893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.552 [2024-12-06 13:56:30.836924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.552 [2024-12-06 13:56:30.836934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.552 [2024-12-06 13:56:30.840970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.552 [2024-12-06 13:56:30.841002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.552 [2024-12-06 13:56:30.841013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.552 [2024-12-06 13:56:30.845015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.552 [2024-12-06 13:56:30.845046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.552 [2024-12-06 13:56:30.845056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.552 [2024-12-06 13:56:30.848942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.552 [2024-12-06 13:56:30.848973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.552 [2024-12-06 13:56:30.848984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.552 [2024-12-06 13:56:30.853027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.552 [2024-12-06 13:56:30.853057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.552 [2024-12-06 13:56:30.853068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.552 [2024-12-06 13:56:30.856838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.552 [2024-12-06 13:56:30.856872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.552 [2024-12-06 13:56:30.856883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.552 [2024-12-06 13:56:30.860809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.552 [2024-12-06 13:56:30.860841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.552 [2024-12-06 13:56:30.860853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.552 [2024-12-06 13:56:30.864772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.552 [2024-12-06 13:56:30.864806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.552 [2024-12-06 13:56:30.864818] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.552 [2024-12-06 13:56:30.868770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.552 [2024-12-06 13:56:30.868802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.552 [2024-12-06 13:56:30.868814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.552 [2024-12-06 13:56:30.872634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.552 [2024-12-06 13:56:30.872664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.552 [2024-12-06 13:56:30.872675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.552 [2024-12-06 13:56:30.876380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.552 [2024-12-06 13:56:30.876412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.552 [2024-12-06 13:56:30.876423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.552 [2024-12-06 13:56:30.880277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.552 [2024-12-06 13:56:30.880307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.552 [2024-12-06 13:56:30.880318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.552 [2024-12-06 13:56:30.884223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.552 [2024-12-06 13:56:30.884254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.552 [2024-12-06 13:56:30.884265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.552 [2024-12-06 13:56:30.888293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.552 [2024-12-06 13:56:30.888324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.552 [2024-12-06 13:56:30.888336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.552 [2024-12-06 13:56:30.892498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.552 [2024-12-06 13:56:30.892537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:31.552 [2024-12-06 13:56:30.892565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.552 [2024-12-06 13:56:30.896647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.552 [2024-12-06 13:56:30.896695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.552 [2024-12-06 13:56:30.896707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.552 [2024-12-06 13:56:30.901083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.552 [2024-12-06 13:56:30.901126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.552 [2024-12-06 13:56:30.901138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.552 [2024-12-06 13:56:30.905113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.552 [2024-12-06 13:56:30.905153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.552 [2024-12-06 13:56:30.905165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.552 [2024-12-06 13:56:30.909753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.552 [2024-12-06 13:56:30.909789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.552 [2024-12-06 13:56:30.909803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.553 [2024-12-06 13:56:30.914562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.553 [2024-12-06 13:56:30.914620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.553 [2024-12-06 13:56:30.914632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.553 [2024-12-06 13:56:30.919033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.553 [2024-12-06 13:56:30.919066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.553 [2024-12-06 13:56:30.919077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.553 [2024-12-06 13:56:30.923389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.553 [2024-12-06 13:56:30.923423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.553 [2024-12-06 13:56:30.923450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.553 [2024-12-06 13:56:30.927763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.553 [2024-12-06 13:56:30.927794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.553 [2024-12-06 13:56:30.927805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.553 [2024-12-06 13:56:30.932015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.553 [2024-12-06 13:56:30.932047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.553 [2024-12-06 13:56:30.932058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.553 [2024-12-06 13:56:30.936111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.553 [2024-12-06 13:56:30.936152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.553 [2024-12-06 13:56:30.936164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.553 [2024-12-06 13:56:30.940168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.553 [2024-12-06 13:56:30.940197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.553 [2024-12-06 13:56:30.940225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.553 [2024-12-06 13:56:30.944289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.553 [2024-12-06 13:56:30.944320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.553 [2024-12-06 13:56:30.944331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.553 [2024-12-06 13:56:30.948268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.553 [2024-12-06 13:56:30.948299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.553 [2024-12-06 13:56:30.948327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.814 [2024-12-06 13:56:30.952496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.814 [2024-12-06 13:56:30.952543] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.814 [2024-12-06 13:56:30.952554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.814 [2024-12-06 13:56:30.956716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.814 [2024-12-06 13:56:30.956751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.814 [2024-12-06 13:56:30.956763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.814 [2024-12-06 13:56:30.960772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.814 [2024-12-06 13:56:30.960802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.814 [2024-12-06 13:56:30.960812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.814 [2024-12-06 13:56:30.964887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.814 [2024-12-06 13:56:30.964918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.814 [2024-12-06 13:56:30.964928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.814 [2024-12-06 13:56:30.969040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.814 [2024-12-06 13:56:30.969072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.814 [2024-12-06 13:56:30.969084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.814 [2024-12-06 13:56:30.973215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.814 [2024-12-06 13:56:30.973246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.814 [2024-12-06 13:56:30.973258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.814 [2024-12-06 13:56:30.977216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.814 [2024-12-06 13:56:30.977249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.814 [2024-12-06 13:56:30.977260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.814 [2024-12-06 13:56:30.981313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 
00:17:31.814 [2024-12-06 13:56:30.981345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.814 [2024-12-06 13:56:30.981355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.814 [2024-12-06 13:56:30.985536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.814 [2024-12-06 13:56:30.985569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.814 [2024-12-06 13:56:30.985581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.814 [2024-12-06 13:56:30.989744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.814 [2024-12-06 13:56:30.989775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.814 [2024-12-06 13:56:30.989785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.814 [2024-12-06 13:56:30.993899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.814 [2024-12-06 13:56:30.993931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.814 [2024-12-06 13:56:30.993942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.814 [2024-12-06 13:56:30.997984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.814 [2024-12-06 13:56:30.998025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.814 [2024-12-06 13:56:30.998036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.814 [2024-12-06 13:56:31.001934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.814 [2024-12-06 13:56:31.001977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.814 [2024-12-06 13:56:31.001988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.814 [2024-12-06 13:56:31.005977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.814 [2024-12-06 13:56:31.006008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.814 [2024-12-06 13:56:31.006019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.814 [2024-12-06 13:56:31.010096] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.814 [2024-12-06 13:56:31.010152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.814 [2024-12-06 13:56:31.010180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.814 [2024-12-06 13:56:31.013985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.814 [2024-12-06 13:56:31.014017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.814 [2024-12-06 13:56:31.014028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.814 [2024-12-06 13:56:31.018107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.814 [2024-12-06 13:56:31.018164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.814 [2024-12-06 13:56:31.018175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.814 [2024-12-06 13:56:31.022098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.814 [2024-12-06 13:56:31.022138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.814 [2024-12-06 13:56:31.022149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.814 [2024-12-06 13:56:31.025974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.814 [2024-12-06 13:56:31.026055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.814 [2024-12-06 13:56:31.026065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.814 [2024-12-06 13:56:31.030200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.814 [2024-12-06 13:56:31.030242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.814 [2024-12-06 13:56:31.030254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.814 [2024-12-06 13:56:31.034847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.814 [2024-12-06 13:56:31.034879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.814 [2024-12-06 13:56:31.034907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 
p:0 m:0 dnr:0 00:17:31.814 [2024-12-06 13:56:31.039457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.814 [2024-12-06 13:56:31.039490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.814 [2024-12-06 13:56:31.039502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.814 [2024-12-06 13:56:31.043866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.814 [2024-12-06 13:56:31.043898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.814 [2024-12-06 13:56:31.043909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.814 [2024-12-06 13:56:31.048458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.814 [2024-12-06 13:56:31.048492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.814 [2024-12-06 13:56:31.048506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.814 [2024-12-06 13:56:31.053089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.814 [2024-12-06 13:56:31.053166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.814 [2024-12-06 13:56:31.053179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.815 [2024-12-06 13:56:31.057917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.815 [2024-12-06 13:56:31.057949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.815 [2024-12-06 13:56:31.057960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.815 [2024-12-06 13:56:31.062601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.815 [2024-12-06 13:56:31.062633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.815 [2024-12-06 13:56:31.062659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.815 [2024-12-06 13:56:31.067227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.815 [2024-12-06 13:56:31.067271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.815 [2024-12-06 13:56:31.067282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.815 [2024-12-06 13:56:31.071881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.815 [2024-12-06 13:56:31.071929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.815 [2024-12-06 13:56:31.071957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.815 [2024-12-06 13:56:31.076594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.815 [2024-12-06 13:56:31.076626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.815 [2024-12-06 13:56:31.076637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.815 [2024-12-06 13:56:31.080938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.815 [2024-12-06 13:56:31.080971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.815 [2024-12-06 13:56:31.080999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.815 [2024-12-06 13:56:31.085303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.815 [2024-12-06 13:56:31.085336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.815 [2024-12-06 13:56:31.085347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.815 [2024-12-06 13:56:31.089598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.815 [2024-12-06 13:56:31.089631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.815 [2024-12-06 13:56:31.089642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.815 [2024-12-06 13:56:31.094097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.815 [2024-12-06 13:56:31.094171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.815 [2024-12-06 13:56:31.094185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.815 [2024-12-06 13:56:31.098427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.815 [2024-12-06 13:56:31.098462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.815 [2024-12-06 13:56:31.098474] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.815 [2024-12-06 13:56:31.102653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.815 [2024-12-06 13:56:31.102686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.815 [2024-12-06 13:56:31.102697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.815 [2024-12-06 13:56:31.106909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.815 [2024-12-06 13:56:31.106941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.815 [2024-12-06 13:56:31.106952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.815 [2024-12-06 13:56:31.111599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.815 [2024-12-06 13:56:31.111635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.815 [2024-12-06 13:56:31.111648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.815 [2024-12-06 13:56:31.115837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.815 [2024-12-06 13:56:31.115871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.815 [2024-12-06 13:56:31.115882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.815 [2024-12-06 13:56:31.120016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.815 [2024-12-06 13:56:31.120049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.815 [2024-12-06 13:56:31.120060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.815 [2024-12-06 13:56:31.124054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.815 [2024-12-06 13:56:31.124087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.815 [2024-12-06 13:56:31.124109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.815 [2024-12-06 13:56:31.128432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.815 [2024-12-06 13:56:31.128476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:31.815 [2024-12-06 13:56:31.128489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.815 [2024-12-06 13:56:31.132642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.815 [2024-12-06 13:56:31.132726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.815 [2024-12-06 13:56:31.132738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.815 [2024-12-06 13:56:31.137031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.815 [2024-12-06 13:56:31.137066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.815 [2024-12-06 13:56:31.137092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.815 [2024-12-06 13:56:31.141379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.815 [2024-12-06 13:56:31.141413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.815 [2024-12-06 13:56:31.141426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.815 [2024-12-06 13:56:31.145641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.815 [2024-12-06 13:56:31.145674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.815 [2024-12-06 13:56:31.145685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.815 [2024-12-06 13:56:31.149978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.815 [2024-12-06 13:56:31.150012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.815 [2024-12-06 13:56:31.150024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.815 [2024-12-06 13:56:31.154467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.815 [2024-12-06 13:56:31.154499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.815 [2024-12-06 13:56:31.154525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.815 [2024-12-06 13:56:31.158805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.815 [2024-12-06 13:56:31.158841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.815 [2024-12-06 13:56:31.158854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.815 [2024-12-06 13:56:31.163125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.815 [2024-12-06 13:56:31.163166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.815 [2024-12-06 13:56:31.163178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.815 [2024-12-06 13:56:31.167275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.815 [2024-12-06 13:56:31.167306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.815 [2024-12-06 13:56:31.167317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.815 [2024-12-06 13:56:31.171587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.815 [2024-12-06 13:56:31.171621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.815 [2024-12-06 13:56:31.171632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.815 [2024-12-06 13:56:31.176255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.816 [2024-12-06 13:56:31.176286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.816 [2024-12-06 13:56:31.176298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.816 [2024-12-06 13:56:31.180377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.816 [2024-12-06 13:56:31.180408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.816 [2024-12-06 13:56:31.180418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.816 [2024-12-06 13:56:31.184516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.816 [2024-12-06 13:56:31.184548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.816 [2024-12-06 13:56:31.184559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.816 [2024-12-06 13:56:31.188981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.816 [2024-12-06 13:56:31.189014] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.816 [2024-12-06 13:56:31.189042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.816 [2024-12-06 13:56:31.193380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.816 [2024-12-06 13:56:31.193411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.816 [2024-12-06 13:56:31.193422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:31.816 [2024-12-06 13:56:31.197426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.816 [2024-12-06 13:56:31.197457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.816 [2024-12-06 13:56:31.197468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:31.816 [2024-12-06 13:56:31.201901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.816 [2024-12-06 13:56:31.201932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.816 [2024-12-06 13:56:31.201944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:31.816 [2024-12-06 13:56:31.206156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.816 [2024-12-06 13:56:31.206201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.816 [2024-12-06 13:56:31.206213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.816 [2024-12-06 13:56:31.210522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:31.816 [2024-12-06 13:56:31.210564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.816 [2024-12-06 13:56:31.210575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:32.133 [2024-12-06 13:56:31.214890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:32.133 [2024-12-06 13:56:31.214921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.133 [2024-12-06 13:56:31.214933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:32.133 [2024-12-06 13:56:31.219149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 
00:17:32.133 [2024-12-06 13:56:31.219193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.133 [2024-12-06 13:56:31.219205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:32.133 [2024-12-06 13:56:31.223309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:32.133 [2024-12-06 13:56:31.223366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.133 [2024-12-06 13:56:31.223379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.133 [2024-12-06 13:56:31.227320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:32.133 [2024-12-06 13:56:31.227386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.133 [2024-12-06 13:56:31.227398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:32.133 [2024-12-06 13:56:31.231403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:32.133 [2024-12-06 13:56:31.231446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.133 [2024-12-06 13:56:31.231457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:32.133 [2024-12-06 13:56:31.235638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:32.133 [2024-12-06 13:56:31.235688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.133 [2024-12-06 13:56:31.235700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:32.133 [2024-12-06 13:56:31.239852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:32.133 [2024-12-06 13:56:31.239883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.133 [2024-12-06 13:56:31.239895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.133 [2024-12-06 13:56:31.243991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:32.134 [2024-12-06 13:56:31.244023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.134 [2024-12-06 13:56:31.244033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:32.134 [2024-12-06 13:56:31.248068] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:32.134 [2024-12-06 13:56:31.248110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.134 [2024-12-06 13:56:31.248122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:32.134 [2024-12-06 13:56:31.252281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:32.134 [2024-12-06 13:56:31.252311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.134 [2024-12-06 13:56:31.252322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:32.134 [2024-12-06 13:56:31.256613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c10660) 00:17:32.134 [2024-12-06 13:56:31.256676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.134 [2024-12-06 13:56:31.256687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.134 7502.00 IOPS, 937.75 MiB/s 00:17:32.134 Latency(us) 00:17:32.134 [2024-12-06T13:56:31.538Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.134 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:32.134 nvme0n1 : 2.00 7499.28 937.41 0.00 0.00 2130.38 1705.43 5659.93 00:17:32.134 [2024-12-06T13:56:31.538Z] =================================================================================================================== 00:17:32.134 [2024-12-06T13:56:31.538Z] Total : 7499.28 937.41 0.00 0.00 2130.38 1705.43 5659.93 00:17:32.134 { 00:17:32.134 "results": [ 00:17:32.134 { 00:17:32.134 "job": "nvme0n1", 00:17:32.134 "core_mask": "0x2", 00:17:32.134 "workload": "randread", 00:17:32.134 "status": "finished", 00:17:32.134 "queue_depth": 16, 00:17:32.134 "io_size": 131072, 00:17:32.134 "runtime": 2.002858, 00:17:32.134 "iops": 7499.283523844426, 00:17:32.134 "mibps": 937.4104404805532, 00:17:32.134 "io_failed": 0, 00:17:32.134 "io_timeout": 0, 00:17:32.134 "avg_latency_us": 2130.3803723520155, 00:17:32.134 "min_latency_us": 1705.4254545454546, 00:17:32.134 "max_latency_us": 5659.927272727273 00:17:32.134 } 00:17:32.134 ], 00:17:32.134 "core_count": 1 00:17:32.134 } 00:17:32.134 13:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:32.134 13:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:32.134 13:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:32.134 13:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:32.134 | .driver_specific 00:17:32.134 | .nvme_error 00:17:32.134 | .status_code 00:17:32.134 | .command_transient_transport_error' 00:17:32.393 13:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 
-- # (( 484 > 0 )) 00:17:32.393 13:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80238 00:17:32.393 13:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80238 ']' 00:17:32.393 13:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80238 00:17:32.393 13:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:17:32.393 13:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:32.393 13:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80238 00:17:32.393 13:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:32.393 13:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:32.393 killing process with pid 80238 00:17:32.393 13:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80238' 00:17:32.393 Received shutdown signal, test time was about 2.000000 seconds 00:17:32.393 00:17:32.393 Latency(us) 00:17:32.393 [2024-12-06T13:56:31.797Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.393 [2024-12-06T13:56:31.797Z] =================================================================================================================== 00:17:32.393 [2024-12-06T13:56:31.797Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:32.393 13:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80238 00:17:32.393 13:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80238 00:17:32.393 13:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:17:32.393 13:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:32.393 13:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:17:32.393 13:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:17:32.393 13:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:17:32.393 13:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:17:32.393 13:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80300 00:17:32.393 13:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80300 /var/tmp/bperf.sock 00:17:32.393 13:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80300 ']' 00:17:32.393 13:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:32.393 13:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:32.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
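A minimal sketch of the transient-error query that produced the "(( 484 > 0 ))" check above, assuming the same rpc.py path and /var/tmp/bperf.sock socket shown in this log; the "count" variable name is illustrative and this is not a verbatim extract of host/digest.sh:

count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
# The digest errors above complete with COMMAND TRANSIENT TRANSPORT ERROR (00/22),
# which is the status this counter tracks, so a non-zero value means the data
# digest (--ddgst) path caught the corrupted CRC32C checksums.
(( count > 0 )) && echo "transient transport errors recorded: ${count}"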
00:17:32.393 13:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:32.393 13:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:32.653 13:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:32.653 [2024-12-06 13:56:31.836278] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:17:32.653 [2024-12-06 13:56:31.836348] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80300 ] 00:17:32.653 [2024-12-06 13:56:31.974931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.653 [2024-12-06 13:56:32.020461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:32.912 [2024-12-06 13:56:32.078279] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:33.480 13:56:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:33.480 13:56:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:17:33.480 13:56:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:33.480 13:56:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:33.739 13:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:33.739 13:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.739 13:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:33.739 13:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.739 13:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:33.739 13:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:34.307 nvme0n1 00:17:34.307 13:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:34.308 13:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.308 13:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:34.308 13:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.308 13:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:34.308 13:56:33 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:34.308 Running I/O for 2 seconds... 00:17:34.308 [2024-12-06 13:56:33.651481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016efac10 00:17:34.308 [2024-12-06 13:56:33.652884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.308 [2024-12-06 13:56:33.652919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:34.308 [2024-12-06 13:56:33.665875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016efb480 00:17:34.308 [2024-12-06 13:56:33.667260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.308 [2024-12-06 13:56:33.667293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.308 [2024-12-06 13:56:33.680428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016efbcf0 00:17:34.308 [2024-12-06 13:56:33.681688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.308 [2024-12-06 13:56:33.681716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:34.308 [2024-12-06 13:56:33.694791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016efc560 00:17:34.308 [2024-12-06 13:56:33.696113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.308 [2024-12-06 13:56:33.696173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:34.308 [2024-12-06 13:56:33.708995] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016efcdd0 00:17:34.568 [2024-12-06 13:56:33.710140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.568 [2024-12-06 13:56:33.710168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:34.568 [2024-12-06 13:56:33.723011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016efd640 00:17:34.568 [2024-12-06 13:56:33.724286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.568 [2024-12-06 13:56:33.724346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:34.568 [2024-12-06 13:56:33.737322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016efdeb0 00:17:34.568 [2024-12-06 13:56:33.738514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 
lba:23956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.568 [2024-12-06 13:56:33.738543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:34.568 [2024-12-06 13:56:33.751781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016efe720 00:17:34.568 [2024-12-06 13:56:33.752989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.568 [2024-12-06 13:56:33.753017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:34.568 [2024-12-06 13:56:33.766125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016eff3c8 00:17:34.568 [2024-12-06 13:56:33.767271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.568 [2024-12-06 13:56:33.767298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:34.568 [2024-12-06 13:56:33.787593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016eff3c8 00:17:34.568 [2024-12-06 13:56:33.790056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.568 [2024-12-06 13:56:33.790086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:34.568 [2024-12-06 13:56:33.803360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016efe720 00:17:34.568 [2024-12-06 13:56:33.805788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.568 [2024-12-06 13:56:33.805817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:34.568 [2024-12-06 13:56:33.818219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016efdeb0 00:17:34.568 [2024-12-06 13:56:33.820616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.568 [2024-12-06 13:56:33.820645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:34.568 [2024-12-06 13:56:33.833142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016efd640 00:17:34.568 [2024-12-06 13:56:33.835461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.568 [2024-12-06 13:56:33.835491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:34.568 [2024-12-06 13:56:33.848093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016efcdd0 00:17:34.568 [2024-12-06 13:56:33.850390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:19 nsid:1 lba:10109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.568 [2024-12-06 13:56:33.850419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:34.568 [2024-12-06 13:56:33.863417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016efc560 00:17:34.568 [2024-12-06 13:56:33.865857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.568 [2024-12-06 13:56:33.865905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:34.568 [2024-12-06 13:56:33.879171] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016efbcf0 00:17:34.569 [2024-12-06 13:56:33.881448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.569 [2024-12-06 13:56:33.881477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:34.569 [2024-12-06 13:56:33.894725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016efb480 00:17:34.569 [2024-12-06 13:56:33.896968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.569 [2024-12-06 13:56:33.896997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:34.569 [2024-12-06 13:56:33.909714] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016efac10 00:17:34.569 [2024-12-06 13:56:33.912005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.569 [2024-12-06 13:56:33.912034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:34.569 [2024-12-06 13:56:33.924586] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016efa3a0 00:17:34.569 [2024-12-06 13:56:33.926702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.569 [2024-12-06 13:56:33.926732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:34.569 [2024-12-06 13:56:33.938899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ef9b30 00:17:34.569 [2024-12-06 13:56:33.941137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.569 [2024-12-06 13:56:33.941173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:34.569 [2024-12-06 13:56:33.953538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ef92c0 00:17:34.569 [2024-12-06 13:56:33.955759] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.569 [2024-12-06 13:56:33.955788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:34.569 [2024-12-06 13:56:33.968888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ef8a50 00:17:34.829 [2024-12-06 13:56:33.971288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.829 [2024-12-06 13:56:33.971316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:34.829 [2024-12-06 13:56:33.985531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ef81e0 00:17:34.829 [2024-12-06 13:56:33.987938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:11393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.829 [2024-12-06 13:56:33.987967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:34.829 [2024-12-06 13:56:34.001832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ef7970 00:17:34.829 [2024-12-06 13:56:34.004209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.829 [2024-12-06 13:56:34.004237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:34.829 [2024-12-06 13:56:34.016878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ef7100 00:17:34.829 [2024-12-06 13:56:34.019038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.829 [2024-12-06 13:56:34.019068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:34.829 [2024-12-06 13:56:34.031460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ef6890 00:17:34.829 [2024-12-06 13:56:34.033508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.829 [2024-12-06 13:56:34.033536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:34.829 [2024-12-06 13:56:34.045282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ef6020 00:17:34.829 [2024-12-06 13:56:34.047208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.829 [2024-12-06 13:56:34.047237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:34.829 [2024-12-06 13:56:34.059099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ef57b0 00:17:34.829 [2024-12-06 13:56:34.060976] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.829 [2024-12-06 13:56:34.061004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:34.829 [2024-12-06 13:56:34.072849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ef4f40 00:17:34.830 [2024-12-06 13:56:34.074807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.830 [2024-12-06 13:56:34.074834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:34.830 [2024-12-06 13:56:34.086646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ef46d0 00:17:34.830 [2024-12-06 13:56:34.088400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.830 [2024-12-06 13:56:34.088428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:34.830 [2024-12-06 13:56:34.099961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ef3e60 00:17:34.830 [2024-12-06 13:56:34.101846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.830 [2024-12-06 13:56:34.101874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:34.830 [2024-12-06 13:56:34.113417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ef35f0 00:17:34.830 [2024-12-06 13:56:34.115206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.830 [2024-12-06 13:56:34.115234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:34.830 [2024-12-06 13:56:34.126863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ef2d80 00:17:34.830 [2024-12-06 13:56:34.128589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.830 [2024-12-06 13:56:34.128616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:34.830 [2024-12-06 13:56:34.140372] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ef2510 00:17:34.830 [2024-12-06 13:56:34.142091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.830 [2024-12-06 13:56:34.142131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:34.830 [2024-12-06 13:56:34.153365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ef1ca0 00:17:34.830 [2024-12-06 
13:56:34.154975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.830 [2024-12-06 13:56:34.155202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:34.830 [2024-12-06 13:56:34.167086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ef1430 00:17:34.830 [2024-12-06 13:56:34.168972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.830 [2024-12-06 13:56:34.169005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:34.830 [2024-12-06 13:56:34.180759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ef0bc0 00:17:34.830 [2024-12-06 13:56:34.182502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.830 [2024-12-06 13:56:34.182551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:34.830 [2024-12-06 13:56:34.194203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ef0350 00:17:34.830 [2024-12-06 13:56:34.195851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:25038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.830 [2024-12-06 13:56:34.195884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:34.830 [2024-12-06 13:56:34.207948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016eefae0 00:17:34.830 [2024-12-06 13:56:34.209779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.830 [2024-12-06 13:56:34.209805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:34.830 [2024-12-06 13:56:34.221536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016eef270 00:17:34.830 [2024-12-06 13:56:34.223194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.830 [2024-12-06 13:56:34.223225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:35.090 [2024-12-06 13:56:34.234889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016eeea00 00:17:35.090 [2024-12-06 13:56:34.236706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:25063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.090 [2024-12-06 13:56:34.236733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:35.090 [2024-12-06 13:56:34.248649] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016eee190 
00:17:35.090 [2024-12-06 13:56:34.250250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:24239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.090 [2024-12-06 13:56:34.250415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:35.090 [2024-12-06 13:56:34.262120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016eed920 00:17:35.090 [2024-12-06 13:56:34.263584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.090 [2024-12-06 13:56:34.263618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:35.090 [2024-12-06 13:56:34.275430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016eed0b0 00:17:35.090 [2024-12-06 13:56:34.276996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.090 [2024-12-06 13:56:34.277188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:35.090 [2024-12-06 13:56:34.289050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016eec840 00:17:35.090 [2024-12-06 13:56:34.290731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.090 [2024-12-06 13:56:34.290763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:35.090 [2024-12-06 13:56:34.302440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016eebfd0 00:17:35.091 [2024-12-06 13:56:34.303961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.091 [2024-12-06 13:56:34.303994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:35.091 [2024-12-06 13:56:34.316007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016eeb760 00:17:35.091 [2024-12-06 13:56:34.317703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:15755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.091 [2024-12-06 13:56:34.317735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:35.091 [2024-12-06 13:56:34.329559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016eeaef0 00:17:35.091 [2024-12-06 13:56:34.331543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:23349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.091 [2024-12-06 13:56:34.331575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:35.091 [2024-12-06 13:56:34.343440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) 
with pdu=0x200016eea680 00:17:35.091 [2024-12-06 13:56:34.345132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.091 [2024-12-06 13:56:34.345163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.091 [2024-12-06 13:56:34.357087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ee9e10 00:17:35.091 [2024-12-06 13:56:34.358531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.091 [2024-12-06 13:56:34.358605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:35.091 [2024-12-06 13:56:34.370639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ee95a0 00:17:35.091 [2024-12-06 13:56:34.372165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.091 [2024-12-06 13:56:34.372223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:35.091 [2024-12-06 13:56:34.384474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ee8d30 00:17:35.091 [2024-12-06 13:56:34.386235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.091 [2024-12-06 13:56:34.386266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:35.091 [2024-12-06 13:56:34.398886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ee84c0 00:17:35.091 [2024-12-06 13:56:34.400546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:17227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.091 [2024-12-06 13:56:34.400574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:35.091 [2024-12-06 13:56:34.413183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ee7c50 00:17:35.091 [2024-12-06 13:56:34.414754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.091 [2024-12-06 13:56:34.414785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:35.091 [2024-12-06 13:56:34.427073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ee73e0 00:17:35.091 [2024-12-06 13:56:34.428668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.091 [2024-12-06 13:56:34.428834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:35.091 [2024-12-06 13:56:34.441435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1ea3770) with pdu=0x200016ee6b70 00:17:35.091 [2024-12-06 13:56:34.442864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.091 [2024-12-06 13:56:34.442898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:35.091 [2024-12-06 13:56:34.455824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ee6300 00:17:35.091 [2024-12-06 13:56:34.457330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:14943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.091 [2024-12-06 13:56:34.457357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:35.091 [2024-12-06 13:56:34.470037] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ee5a90 00:17:35.091 [2024-12-06 13:56:34.471627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.091 [2024-12-06 13:56:34.471654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:35.091 [2024-12-06 13:56:34.483524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ee5220 00:17:35.091 [2024-12-06 13:56:34.484798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.091 [2024-12-06 13:56:34.484832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:35.351 [2024-12-06 13:56:34.497579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ee49b0 00:17:35.351 [2024-12-06 13:56:34.499266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.351 [2024-12-06 13:56:34.499493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:35.351 [2024-12-06 13:56:34.512268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ee4140 00:17:35.351 [2024-12-06 13:56:34.513929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.351 [2024-12-06 13:56:34.513964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:35.351 [2024-12-06 13:56:34.526691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ee38d0 00:17:35.351 [2024-12-06 13:56:34.528318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.351 [2024-12-06 13:56:34.528361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:35.351 [2024-12-06 13:56:34.540938] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ee3060 00:17:35.351 [2024-12-06 13:56:34.542378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.351 [2024-12-06 13:56:34.542534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:35.351 [2024-12-06 13:56:34.555503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ee27f0 00:17:35.351 [2024-12-06 13:56:34.556705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.351 [2024-12-06 13:56:34.556738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:35.351 [2024-12-06 13:56:34.570083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ee1f80 00:17:35.351 [2024-12-06 13:56:34.571495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.351 [2024-12-06 13:56:34.571529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.351 [2024-12-06 13:56:34.584298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ee1710 00:17:35.351 [2024-12-06 13:56:34.585439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.351 [2024-12-06 13:56:34.585596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:35.351 [2024-12-06 13:56:34.599195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ee0ea0 00:17:35.351 [2024-12-06 13:56:34.600649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.351 [2024-12-06 13:56:34.600683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:35.351 [2024-12-06 13:56:34.615539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ee0630 00:17:35.351 [2024-12-06 13:56:34.616971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.351 [2024-12-06 13:56:34.617003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:35.351 [2024-12-06 13:56:34.630974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016edfdc0 00:17:35.351 [2024-12-06 13:56:34.633661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.351 [2024-12-06 13:56:34.633694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:35.351 
17459.00 IOPS, 68.20 MiB/s [2024-12-06T13:56:34.755Z] [2024-12-06 13:56:34.647440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016edf550 00:17:35.351 [2024-12-06 13:56:34.648686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.351 [2024-12-06 13:56:34.648724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:35.351 [2024-12-06 13:56:34.662372] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016edece0 00:17:35.351 [2024-12-06 13:56:34.663602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.351 [2024-12-06 13:56:34.663669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:35.351 [2024-12-06 13:56:34.677480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ede470 00:17:35.351 [2024-12-06 13:56:34.678816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.351 [2024-12-06 13:56:34.678850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:35.351 [2024-12-06 13:56:34.698541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016eddc00 00:17:35.351 [2024-12-06 13:56:34.700956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.351 [2024-12-06 13:56:34.700991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:35.351 [2024-12-06 13:56:34.713404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ede470 00:17:35.351 [2024-12-06 13:56:34.716195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.351 [2024-12-06 13:56:34.716258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:35.351 [2024-12-06 13:56:34.728724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016edece0 00:17:35.351 [2024-12-06 13:56:34.730983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.351 [2024-12-06 13:56:34.731016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:35.351 [2024-12-06 13:56:34.743917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016edf550 00:17:35.351 [2024-12-06 13:56:34.746435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.351 [2024-12-06 13:56:34.746467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:35.611 [2024-12-06 13:56:34.759230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016edfdc0 00:17:35.611 [2024-12-06 13:56:34.761521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.611 [2024-12-06 13:56:34.761557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:35.611 [2024-12-06 13:56:34.774158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ee0630 00:17:35.611 [2024-12-06 13:56:34.776704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.611 [2024-12-06 13:56:34.776733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:35.611 [2024-12-06 13:56:34.788461] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ee0ea0 00:17:35.611 [2024-12-06 13:56:34.790576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:20391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.611 [2024-12-06 13:56:34.790789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:35.611 [2024-12-06 13:56:34.803014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ee1710 00:17:35.611 [2024-12-06 13:56:34.805242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.611 [2024-12-06 13:56:34.805277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:35.611 [2024-12-06 13:56:34.818686] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ee1f80 00:17:35.611 [2024-12-06 13:56:34.821373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.611 [2024-12-06 13:56:34.821410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:35.611 [2024-12-06 13:56:34.834877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ee27f0 00:17:35.611 [2024-12-06 13:56:34.837441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.611 [2024-12-06 13:56:34.837479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:35.611 [2024-12-06 13:56:34.850690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ee3060 00:17:35.611 [2024-12-06 13:56:34.852811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.611 [2024-12-06 13:56:34.852844] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:35.611 [2024-12-06 13:56:34.865127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ee38d0 00:17:35.611 [2024-12-06 13:56:34.867186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.611 [2024-12-06 13:56:34.867216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:35.611 [2024-12-06 13:56:34.879081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ee4140 00:17:35.611 [2024-12-06 13:56:34.881284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:10446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.611 [2024-12-06 13:56:34.881316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:35.611 [2024-12-06 13:56:34.893072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ee49b0 00:17:35.611 [2024-12-06 13:56:34.895129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.611 [2024-12-06 13:56:34.895166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:35.611 [2024-12-06 13:56:34.907043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ee5220 00:17:35.611 [2024-12-06 13:56:34.909229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.611 [2024-12-06 13:56:34.909260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:35.611 [2024-12-06 13:56:34.920726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ee5a90 00:17:35.611 [2024-12-06 13:56:34.922797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.611 [2024-12-06 13:56:34.922955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:35.611 [2024-12-06 13:56:34.934961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ee6300 00:17:35.611 [2024-12-06 13:56:34.937084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:3243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.611 [2024-12-06 13:56:34.937156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:35.611 [2024-12-06 13:56:34.949330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ee6b70 00:17:35.611 [2024-12-06 13:56:34.951175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.611 [2024-12-06 13:56:34.951236] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:35.611 [2024-12-06 13:56:34.963615] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ee73e0 00:17:35.611 [2024-12-06 13:56:34.965688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:3094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.611 [2024-12-06 13:56:34.965715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:35.612 [2024-12-06 13:56:34.977648] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ee7c50 00:17:35.612 [2024-12-06 13:56:34.979694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.612 [2024-12-06 13:56:34.979759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:35.612 [2024-12-06 13:56:34.992480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ee84c0 00:17:35.612 [2024-12-06 13:56:34.994608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.612 [2024-12-06 13:56:34.994643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:35.612 [2024-12-06 13:56:35.008979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ee8d30 00:17:35.870 [2024-12-06 13:56:35.011177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.870 [2024-12-06 13:56:35.011392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:35.870 [2024-12-06 13:56:35.024571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ee95a0 00:17:35.870 [2024-12-06 13:56:35.026485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.870 [2024-12-06 13:56:35.026668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:35.870 [2024-12-06 13:56:35.039319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ee9e10 00:17:35.870 [2024-12-06 13:56:35.041258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.870 [2024-12-06 13:56:35.041290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:35.870 [2024-12-06 13:56:35.055111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016eea680 00:17:35.870 [2024-12-06 13:56:35.057383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:3255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.870 [2024-12-06 
13:56:35.057458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:35.870 [2024-12-06 13:56:35.070858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016eeaef0 00:17:35.870 [2024-12-06 13:56:35.073005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:14245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.870 [2024-12-06 13:56:35.073034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:35.870 [2024-12-06 13:56:35.085695] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016eeb760 00:17:35.870 [2024-12-06 13:56:35.087593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.870 [2024-12-06 13:56:35.087630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:35.870 [2024-12-06 13:56:35.100335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016eebfd0 00:17:35.870 [2024-12-06 13:56:35.102171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.870 [2024-12-06 13:56:35.102349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:35.870 [2024-12-06 13:56:35.115243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016eec840 00:17:35.870 [2024-12-06 13:56:35.117031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.870 [2024-12-06 13:56:35.117082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:35.870 [2024-12-06 13:56:35.130202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016eed0b0 00:17:35.870 [2024-12-06 13:56:35.131910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.870 [2024-12-06 13:56:35.132080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:35.870 [2024-12-06 13:56:35.144477] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016eed920 00:17:35.870 [2024-12-06 13:56:35.146251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.870 [2024-12-06 13:56:35.146285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:35.870 [2024-12-06 13:56:35.159293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016eee190 00:17:35.870 [2024-12-06 13:56:35.161000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:35.870 [2024-12-06 13:56:35.161035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:35.870 [2024-12-06 13:56:35.174370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016eeea00 00:17:35.870 [2024-12-06 13:56:35.176135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.871 [2024-12-06 13:56:35.176187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:35.871 [2024-12-06 13:56:35.189596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016eef270 00:17:35.871 [2024-12-06 13:56:35.191357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.871 [2024-12-06 13:56:35.191392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:35.871 [2024-12-06 13:56:35.204391] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016eefae0 00:17:35.871 [2024-12-06 13:56:35.206148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.871 [2024-12-06 13:56:35.206194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:35.871 [2024-12-06 13:56:35.218867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ef0350 00:17:35.871 [2024-12-06 13:56:35.220495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.871 [2024-12-06 13:56:35.220713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:35.871 [2024-12-06 13:56:35.233151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ef0bc0 00:17:35.871 [2024-12-06 13:56:35.234718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.871 [2024-12-06 13:56:35.234750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:35.871 [2024-12-06 13:56:35.246538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ef1430 00:17:35.871 [2024-12-06 13:56:35.248075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.871 [2024-12-06 13:56:35.248254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:35.871 [2024-12-06 13:56:35.260951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ef1ca0 00:17:35.871 [2024-12-06 13:56:35.262646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8778 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:17:35.871 [2024-12-06 13:56:35.262850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:36.129 [2024-12-06 13:56:35.275566] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ef2510 00:17:36.129 [2024-12-06 13:56:35.277239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.129 [2024-12-06 13:56:35.277411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:36.129 [2024-12-06 13:56:35.289850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ef2d80 00:17:36.129 [2024-12-06 13:56:35.291579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.129 [2024-12-06 13:56:35.291803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:36.129 [2024-12-06 13:56:35.304450] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ef35f0 00:17:36.129 [2024-12-06 13:56:35.306108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.129 [2024-12-06 13:56:35.306322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:36.129 [2024-12-06 13:56:35.319167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ef3e60 00:17:36.129 [2024-12-06 13:56:35.320880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.129 [2024-12-06 13:56:35.321051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:36.129 [2024-12-06 13:56:35.333559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ef46d0 00:17:36.129 [2024-12-06 13:56:35.335187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.129 [2024-12-06 13:56:35.335429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:36.129 [2024-12-06 13:56:35.347946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ef4f40 00:17:36.129 [2024-12-06 13:56:35.349563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.129 [2024-12-06 13:56:35.349733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:36.129 [2024-12-06 13:56:35.362102] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ef57b0 00:17:36.129 [2024-12-06 13:56:35.363721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5949 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.130 [2024-12-06 13:56:35.363894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:36.130 [2024-12-06 13:56:35.376188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ef6020 00:17:36.130 [2024-12-06 13:56:35.377731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.130 [2024-12-06 13:56:35.377902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:36.130 [2024-12-06 13:56:35.390337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ef6890 00:17:36.130 [2024-12-06 13:56:35.391872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.130 [2024-12-06 13:56:35.392106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:36.130 [2024-12-06 13:56:35.404527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ef7100 00:17:36.130 [2024-12-06 13:56:35.405993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.130 [2024-12-06 13:56:35.406213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:36.130 [2024-12-06 13:56:35.418577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ef7970 00:17:36.130 [2024-12-06 13:56:35.420212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.130 [2024-12-06 13:56:35.420385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:36.130 [2024-12-06 13:56:35.432760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ef81e0 00:17:36.130 [2024-12-06 13:56:35.434296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.130 [2024-12-06 13:56:35.434451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:36.130 [2024-12-06 13:56:35.446865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ef8a50 00:17:36.130 [2024-12-06 13:56:35.448234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.130 [2024-12-06 13:56:35.448269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:36.130 [2024-12-06 13:56:35.460514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ef92c0 00:17:36.130 [2024-12-06 13:56:35.461831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 
nsid:1 lba:6895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.130 [2024-12-06 13:56:35.461864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:36.130 [2024-12-06 13:56:35.473911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016ef9b30 00:17:36.130 [2024-12-06 13:56:35.475350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.130 [2024-12-06 13:56:35.475382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:36.130 [2024-12-06 13:56:35.487657] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016efa3a0 00:17:36.130 [2024-12-06 13:56:35.488815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.130 [2024-12-06 13:56:35.488847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:36.130 [2024-12-06 13:56:35.501055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016efac10 00:17:36.130 [2024-12-06 13:56:35.502535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.130 [2024-12-06 13:56:35.502579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:36.130 [2024-12-06 13:56:35.514742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016efb480 00:17:36.130 [2024-12-06 13:56:35.516004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.130 [2024-12-06 13:56:35.516200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.130 [2024-12-06 13:56:35.528413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016efbcf0 00:17:36.130 [2024-12-06 13:56:35.529565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.130 [2024-12-06 13:56:35.529600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:36.389 [2024-12-06 13:56:35.541745] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016efc560 00:17:36.389 [2024-12-06 13:56:35.542959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.389 [2024-12-06 13:56:35.542991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:36.389 [2024-12-06 13:56:35.555258] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016efcdd0 00:17:36.389 [2024-12-06 13:56:35.556506] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.389 [2024-12-06 13:56:35.556540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:36.389 [2024-12-06 13:56:35.569084] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016efd640 00:17:36.389 [2024-12-06 13:56:35.570375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.389 [2024-12-06 13:56:35.570406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:36.390 [2024-12-06 13:56:35.582365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016efdeb0 00:17:36.390 [2024-12-06 13:56:35.583506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.390 [2024-12-06 13:56:35.583687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:36.390 [2024-12-06 13:56:35.596534] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016efe720 00:17:36.390 [2024-12-06 13:56:35.597829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.390 [2024-12-06 13:56:35.597878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:36.390 [2024-12-06 13:56:35.610247] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016eff3c8 00:17:36.390 [2024-12-06 13:56:35.611299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.390 [2024-12-06 13:56:35.611356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:36.390 [2024-12-06 13:56:35.629325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3770) with pdu=0x200016eff3c8 00:17:36.390 [2024-12-06 13:56:35.631433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:36.390 [2024-12-06 13:56:35.631465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:36.390 17521.50 IOPS, 68.44 MiB/s 00:17:36.390 Latency(us) 00:17:36.390 [2024-12-06T13:56:35.794Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.390 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:36.390 nvme0n1 : 2.01 17511.79 68.41 0.00 0.00 7302.95 6345.08 28478.37 00:17:36.390 [2024-12-06T13:56:35.794Z] =================================================================================================================== 00:17:36.390 [2024-12-06T13:56:35.794Z] Total : 17511.79 68.41 0.00 0.00 7302.95 6345.08 28478.37 00:17:36.390 { 00:17:36.390 "results": [ 00:17:36.390 { 00:17:36.390 "job": "nvme0n1", 00:17:36.390 "core_mask": "0x2", 
00:17:36.390 "workload": "randwrite", 00:17:36.390 "status": "finished", 00:17:36.390 "queue_depth": 128, 00:17:36.390 "io_size": 4096, 00:17:36.390 "runtime": 2.008418, 00:17:36.390 "iops": 17511.792863836115, 00:17:36.390 "mibps": 68.40544087435983, 00:17:36.390 "io_failed": 0, 00:17:36.390 "io_timeout": 0, 00:17:36.390 "avg_latency_us": 7302.946695030255, 00:17:36.390 "min_latency_us": 6345.076363636364, 00:17:36.390 "max_latency_us": 28478.37090909091 00:17:36.390 } 00:17:36.390 ], 00:17:36.390 "core_count": 1 00:17:36.390 } 00:17:36.390 13:56:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:36.390 13:56:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:36.390 13:56:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:36.390 13:56:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:36.390 | .driver_specific 00:17:36.390 | .nvme_error 00:17:36.390 | .status_code 00:17:36.390 | .command_transient_transport_error' 00:17:36.649 13:56:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 137 > 0 )) 00:17:36.649 13:56:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80300 00:17:36.649 13:56:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80300 ']' 00:17:36.649 13:56:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80300 00:17:36.649 13:56:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:17:36.649 13:56:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:36.649 13:56:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80300 00:17:36.649 killing process with pid 80300 00:17:36.649 Received shutdown signal, test time was about 2.000000 seconds 00:17:36.649 00:17:36.649 Latency(us) 00:17:36.649 [2024-12-06T13:56:36.053Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.649 [2024-12-06T13:56:36.053Z] =================================================================================================================== 00:17:36.649 [2024-12-06T13:56:36.053Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:36.649 13:56:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:36.649 13:56:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:36.649 13:56:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80300' 00:17:36.649 13:56:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80300 00:17:36.649 13:56:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80300 00:17:36.908 13:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:17:36.908 13:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:36.908 13:56:36 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:17:36.908 13:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:17:36.908 13:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:17:36.908 13:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80360 00:17:36.908 13:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80360 /var/tmp/bperf.sock 00:17:36.908 13:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:17:36.908 13:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80360 ']' 00:17:36.908 13:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:36.908 13:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:36.908 13:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:36.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:36.908 13:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:36.908 13:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:36.908 [2024-12-06 13:56:36.239324] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:17:36.908 [2024-12-06 13:56:36.239668] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80360 ] 00:17:36.908 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:36.908 Zero copy mechanism will not be used. 
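The (( 137 > 0 )) check traced above is how host/digest.sh decides the previous error case passed: it reads the bdev's NVMe error counters over the bperf RPC socket and extracts the command_transient_transport_error count with jq. A minimal stand-alone sketch of that check, using the same rpc.py path, socket and bdev name as this run (the exit-on-failure handling is simplified here and is not the script's exact code):

    # Query per-status-code NVMe error counters from the running bdevperf instance.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    errs=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
            | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # 137 transient transport errors were recorded in this run; any non-zero count passes.
    (( errs > 0 )) || exit 1

Once the count is verified, the bdevperf process for that case is killed and a new one is started for the next case (randwrite, 128 KiB I/O, queue depth 16), whose startup and RPC setup are traced immediately before and after this point.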
00:17:37.166 [2024-12-06 13:56:36.386300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.166 [2024-12-06 13:56:36.438753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.166 [2024-12-06 13:56:36.492882] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:38.104 13:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:38.104 13:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:17:38.104 13:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:38.104 13:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:38.104 13:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:38.104 13:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.104 13:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:38.362 13:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.362 13:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:38.362 13:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:38.621 nvme0n1 00:17:38.621 13:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:17:38.621 13:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.621 13:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:38.621 13:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.621 13:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:38.621 13:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:38.621 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:38.621 Zero copy mechanism will not be used. 00:17:38.621 Running I/O for 2 seconds... 
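Before the two-second run whose output follows, the trace above shows the setup for this error case: per-status-code NVMe error statistics are enabled and retries disabled on the new bdevperf instance, any previous accel error injection is cleared, the controller is attached with data digest (--ddgst) enabled, and 32 corrupted crc32c operations are armed via accel_error_inject_error, so the digest over the write data no longer matches and each affected command completes with COMMAND TRANSIENT TRANSPORT ERROR. A sketch of that sequence assembled from the commands in this trace (the bperf socket, TCP endpoint and NQN are the ones used by this job; that the accel_error_inject_error calls issued through rpc_cmd go to the nvmf target app on its default RPC socket is an assumption here):

    SPDK=/home/vagrant/spdk_repo/spdk
    # bdevperf was already started for this case with:
    #   $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z

    bperf="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"
    # Keep per-status-code NVMe error counters and never retry, so digest failures stay countable.
    $bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Clear any leftover injection, then attach the controller with data digest enabled.
    $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    $bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
            -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Arm 32 corrupted crc32c results, then kick off the timed workload.
    $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each armed corruption then shows up in the log below as a tcp.c data digest error followed by the corresponding WRITE command print and its transient transport error completion.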
00:17:38.621 [2024-12-06 13:56:37.921689] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.621 [2024-12-06 13:56:37.921806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.621 [2024-12-06 13:56:37.921835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:38.621 [2024-12-06 13:56:37.926760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.621 [2024-12-06 13:56:37.927035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.621 [2024-12-06 13:56:37.927059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:38.621 [2024-12-06 13:56:37.931704] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.621 [2024-12-06 13:56:37.931808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.621 [2024-12-06 13:56:37.931829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:38.621 [2024-12-06 13:56:37.936397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.621 [2024-12-06 13:56:37.936478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.621 [2024-12-06 13:56:37.936499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:38.621 [2024-12-06 13:56:37.940954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.621 [2024-12-06 13:56:37.941087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.621 [2024-12-06 13:56:37.941108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:38.621 [2024-12-06 13:56:37.945806] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.621 [2024-12-06 13:56:37.945973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.621 [2024-12-06 13:56:37.945996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:38.621 [2024-12-06 13:56:37.950546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.621 [2024-12-06 13:56:37.950787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.621 [2024-12-06 13:56:37.950809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:38.621 [2024-12-06 13:56:37.955404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.621 [2024-12-06 13:56:37.955482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.621 [2024-12-06 13:56:37.955503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:38.621 [2024-12-06 13:56:37.959924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.621 [2024-12-06 13:56:37.960028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.621 [2024-12-06 13:56:37.960048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:38.621 [2024-12-06 13:56:37.964789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.621 [2024-12-06 13:56:37.964873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.621 [2024-12-06 13:56:37.964896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:38.621 [2024-12-06 13:56:37.969552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.621 [2024-12-06 13:56:37.969774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.622 [2024-12-06 13:56:37.969795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:38.622 [2024-12-06 13:56:37.974369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.622 [2024-12-06 13:56:37.974479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.622 [2024-12-06 13:56:37.974499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:38.622 [2024-12-06 13:56:37.978899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.622 [2024-12-06 13:56:37.978964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.622 [2024-12-06 13:56:37.978985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:38.622 [2024-12-06 13:56:37.983716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.622 [2024-12-06 13:56:37.983780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.622 [2024-12-06 13:56:37.983801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:38.622 [2024-12-06 13:56:37.988387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.622 [2024-12-06 13:56:37.988471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.622 [2024-12-06 13:56:37.988493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:38.622 [2024-12-06 13:56:37.992862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.622 [2024-12-06 13:56:37.993151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.622 [2024-12-06 13:56:37.993188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:38.622 [2024-12-06 13:56:37.997604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.622 [2024-12-06 13:56:37.997677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.622 [2024-12-06 13:56:37.997697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:38.622 [2024-12-06 13:56:38.002193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.622 [2024-12-06 13:56:38.002294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.622 [2024-12-06 13:56:38.002315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:38.622 [2024-12-06 13:56:38.006524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.622 [2024-12-06 13:56:38.006624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.622 [2024-12-06 13:56:38.006645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:38.622 [2024-12-06 13:56:38.011019] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.622 [2024-12-06 13:56:38.011102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.622 [2024-12-06 13:56:38.011139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:38.622 [2024-12-06 13:56:38.015693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.622 [2024-12-06 13:56:38.015794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.622 [2024-12-06 13:56:38.015815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:38.622 [2024-12-06 13:56:38.020371] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.622 [2024-12-06 13:56:38.020450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.622 [2024-12-06 13:56:38.020470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:38.881 [2024-12-06 13:56:38.024909] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.881 [2024-12-06 13:56:38.025172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.881 [2024-12-06 13:56:38.025193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:38.881 [2024-12-06 13:56:38.029827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.881 [2024-12-06 13:56:38.029910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.881 [2024-12-06 13:56:38.029930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:38.881 [2024-12-06 13:56:38.034553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.881 [2024-12-06 13:56:38.034641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.881 [2024-12-06 13:56:38.034663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:38.881 [2024-12-06 13:56:38.039272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.881 [2024-12-06 13:56:38.039417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.881 [2024-12-06 13:56:38.039438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:38.881 [2024-12-06 13:56:38.043875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.881 [2024-12-06 13:56:38.043996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.881 [2024-12-06 13:56:38.044016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:38.881 [2024-12-06 13:56:38.048678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.881 [2024-12-06 13:56:38.048881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.881 [2024-12-06 13:56:38.048902] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:38.881 [2024-12-06 13:56:38.053536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.881 [2024-12-06 13:56:38.053618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.881 [2024-12-06 13:56:38.053638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:38.881 [2024-12-06 13:56:38.058142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.881 [2024-12-06 13:56:38.058241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.881 [2024-12-06 13:56:38.058264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:38.881 [2024-12-06 13:56:38.063218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.881 [2024-12-06 13:56:38.063324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.881 [2024-12-06 13:56:38.063372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:38.881 [2024-12-06 13:56:38.068354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.881 [2024-12-06 13:56:38.068421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.881 [2024-12-06 13:56:38.068441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:38.881 [2024-12-06 13:56:38.073793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.881 [2024-12-06 13:56:38.073910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.881 [2024-12-06 13:56:38.073931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:38.881 [2024-12-06 13:56:38.079172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.881 [2024-12-06 13:56:38.079354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.881 [2024-12-06 13:56:38.079411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:38.881 [2024-12-06 13:56:38.084412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.881 [2024-12-06 13:56:38.084496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.881 [2024-12-06 
13:56:38.084518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:38.881 [2024-12-06 13:56:38.089377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.881 [2024-12-06 13:56:38.089462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.881 [2024-12-06 13:56:38.089483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:38.881 [2024-12-06 13:56:38.094268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.881 [2024-12-06 13:56:38.094376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.881 [2024-12-06 13:56:38.094398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:38.881 [2024-12-06 13:56:38.099023] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.881 [2024-12-06 13:56:38.099144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.881 [2024-12-06 13:56:38.099165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:38.881 [2024-12-06 13:56:38.103788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.881 [2024-12-06 13:56:38.103926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.881 [2024-12-06 13:56:38.103947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:38.881 [2024-12-06 13:56:38.108311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.881 [2024-12-06 13:56:38.108393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.881 [2024-12-06 13:56:38.108413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:38.881 [2024-12-06 13:56:38.112720] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.881 [2024-12-06 13:56:38.112935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.881 [2024-12-06 13:56:38.112956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:38.881 [2024-12-06 13:56:38.117497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.881 [2024-12-06 13:56:38.117594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:38.881 [2024-12-06 13:56:38.117615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:38.881 [2024-12-06 13:56:38.121696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.881 [2024-12-06 13:56:38.121781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.881 [2024-12-06 13:56:38.121802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:38.881 [2024-12-06 13:56:38.126172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.882 [2024-12-06 13:56:38.126255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.882 [2024-12-06 13:56:38.126274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:38.882 [2024-12-06 13:56:38.130550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.882 [2024-12-06 13:56:38.130656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.882 [2024-12-06 13:56:38.130677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:38.882 [2024-12-06 13:56:38.134931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.882 [2024-12-06 13:56:38.135094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.882 [2024-12-06 13:56:38.135116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:38.882 [2024-12-06 13:56:38.139692] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.882 [2024-12-06 13:56:38.139876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.882 [2024-12-06 13:56:38.139897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:38.882 [2024-12-06 13:56:38.144427] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.882 [2024-12-06 13:56:38.144536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.882 [2024-12-06 13:56:38.144556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:38.882 [2024-12-06 13:56:38.149038] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.882 [2024-12-06 13:56:38.149200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.882 [2024-12-06 13:56:38.149223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:38.882 [2024-12-06 13:56:38.153779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.882 [2024-12-06 13:56:38.153886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.882 [2024-12-06 13:56:38.153907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:38.882 [2024-12-06 13:56:38.158356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.882 [2024-12-06 13:56:38.158422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.882 [2024-12-06 13:56:38.158443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:38.882 [2024-12-06 13:56:38.163340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.882 [2024-12-06 13:56:38.163435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.882 [2024-12-06 13:56:38.163456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:38.882 [2024-12-06 13:56:38.168205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.882 [2024-12-06 13:56:38.168316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.882 [2024-12-06 13:56:38.168338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:38.882 [2024-12-06 13:56:38.173073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.882 [2024-12-06 13:56:38.173150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.882 [2024-12-06 13:56:38.173171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:38.882 [2024-12-06 13:56:38.178168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.882 [2024-12-06 13:56:38.178259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.882 [2024-12-06 13:56:38.178280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:38.882 [2024-12-06 13:56:38.183277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.882 [2024-12-06 13:56:38.183376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.882 [2024-12-06 13:56:38.183398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:38.882 [2024-12-06 13:56:38.188223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.882 [2024-12-06 13:56:38.188340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.882 [2024-12-06 13:56:38.188363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:38.882 [2024-12-06 13:56:38.193142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.882 [2024-12-06 13:56:38.193302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.882 [2024-12-06 13:56:38.193324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:38.882 [2024-12-06 13:56:38.197956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.882 [2024-12-06 13:56:38.198078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.882 [2024-12-06 13:56:38.198100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:38.882 [2024-12-06 13:56:38.202785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.882 [2024-12-06 13:56:38.203013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.882 [2024-12-06 13:56:38.203035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:38.882 [2024-12-06 13:56:38.207809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.882 [2024-12-06 13:56:38.207931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.882 [2024-12-06 13:56:38.207952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:38.882 [2024-12-06 13:56:38.212480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.882 [2024-12-06 13:56:38.212556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.882 [2024-12-06 13:56:38.212577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:38.882 [2024-12-06 13:56:38.217110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.882 [2024-12-06 13:56:38.217239] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.882 [2024-12-06 13:56:38.217283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:38.882 [2024-12-06 13:56:38.221881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.882 [2024-12-06 13:56:38.222028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.882 [2024-12-06 13:56:38.222049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:38.882 [2024-12-06 13:56:38.226604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.882 [2024-12-06 13:56:38.226810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.882 [2024-12-06 13:56:38.226831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:38.882 [2024-12-06 13:56:38.231494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.882 [2024-12-06 13:56:38.231582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.882 [2024-12-06 13:56:38.231604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:38.882 [2024-12-06 13:56:38.235946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.882 [2024-12-06 13:56:38.236062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.882 [2024-12-06 13:56:38.236084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:38.882 [2024-12-06 13:56:38.240843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.882 [2024-12-06 13:56:38.240909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.882 [2024-12-06 13:56:38.240930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:38.882 [2024-12-06 13:56:38.245802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.882 [2024-12-06 13:56:38.245909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.882 [2024-12-06 13:56:38.245931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:38.882 [2024-12-06 13:56:38.250361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.882 [2024-12-06 13:56:38.250427] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.882 [2024-12-06 13:56:38.250448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:38.882 [2024-12-06 13:56:38.254905] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.883 [2024-12-06 13:56:38.255223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.883 [2024-12-06 13:56:38.255246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:38.883 [2024-12-06 13:56:38.259878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.883 [2024-12-06 13:56:38.259967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.883 [2024-12-06 13:56:38.259988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:38.883 [2024-12-06 13:56:38.264769] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.883 [2024-12-06 13:56:38.264857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.883 [2024-12-06 13:56:38.264893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:38.883 [2024-12-06 13:56:38.269375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.883 [2024-12-06 13:56:38.269483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.883 [2024-12-06 13:56:38.269503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:38.883 [2024-12-06 13:56:38.274190] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.883 [2024-12-06 13:56:38.274310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.883 [2024-12-06 13:56:38.274331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:38.883 [2024-12-06 13:56:38.278771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:38.883 [2024-12-06 13:56:38.278856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.883 [2024-12-06 13:56:38.278877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.142 [2024-12-06 13:56:38.283557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.142 [2024-12-06 
13:56:38.283640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.142 [2024-12-06 13:56:38.283660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.142 [2024-12-06 13:56:38.288212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.142 [2024-12-06 13:56:38.288297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.142 [2024-12-06 13:56:38.288318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.142 [2024-12-06 13:56:38.292981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.142 [2024-12-06 13:56:38.293073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.142 [2024-12-06 13:56:38.293112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.142 [2024-12-06 13:56:38.297678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.142 [2024-12-06 13:56:38.297914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.142 [2024-12-06 13:56:38.297935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.142 [2024-12-06 13:56:38.302546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.142 [2024-12-06 13:56:38.302618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.142 [2024-12-06 13:56:38.302654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.142 [2024-12-06 13:56:38.307563] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.142 [2024-12-06 13:56:38.307649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.142 [2024-12-06 13:56:38.307685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.142 [2024-12-06 13:56:38.312162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.142 [2024-12-06 13:56:38.312273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.142 [2024-12-06 13:56:38.312294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.142 [2024-12-06 13:56:38.316685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 
00:17:39.142 [2024-12-06 13:56:38.316796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.142 [2024-12-06 13:56:38.316818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.142 [2024-12-06 13:56:38.321259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.142 [2024-12-06 13:56:38.321338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.142 [2024-12-06 13:56:38.321359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.142 [2024-12-06 13:56:38.326075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.142 [2024-12-06 13:56:38.326364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.142 [2024-12-06 13:56:38.326387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.142 [2024-12-06 13:56:38.330856] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.142 [2024-12-06 13:56:38.330966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.142 [2024-12-06 13:56:38.330987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.142 [2024-12-06 13:56:38.335672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.142 [2024-12-06 13:56:38.335755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.142 [2024-12-06 13:56:38.335792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.142 [2024-12-06 13:56:38.340321] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.142 [2024-12-06 13:56:38.340412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.142 [2024-12-06 13:56:38.340433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.142 [2024-12-06 13:56:38.344797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.142 [2024-12-06 13:56:38.344863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.142 [2024-12-06 13:56:38.344884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.142 [2024-12-06 13:56:38.349649] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.142 [2024-12-06 13:56:38.349873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.142 [2024-12-06 13:56:38.349894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.142 [2024-12-06 13:56:38.354595] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.142 [2024-12-06 13:56:38.354733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.142 [2024-12-06 13:56:38.354753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.142 [2024-12-06 13:56:38.359180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.142 [2024-12-06 13:56:38.359251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.142 [2024-12-06 13:56:38.359272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.142 [2024-12-06 13:56:38.364082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.142 [2024-12-06 13:56:38.364211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.142 [2024-12-06 13:56:38.364232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.142 [2024-12-06 13:56:38.368818] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.142 [2024-12-06 13:56:38.368900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.142 [2024-12-06 13:56:38.368921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.142 [2024-12-06 13:56:38.373777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.142 [2024-12-06 13:56:38.374000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.143 [2024-12-06 13:56:38.374020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.143 [2024-12-06 13:56:38.378683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.143 [2024-12-06 13:56:38.378748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.143 [2024-12-06 13:56:38.378768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.143 [2024-12-06 13:56:38.383195] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.143 [2024-12-06 13:56:38.383299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.143 [2024-12-06 13:56:38.383319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.143 [2024-12-06 13:56:38.388064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.143 [2024-12-06 13:56:38.388244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.143 [2024-12-06 13:56:38.388268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.143 [2024-12-06 13:56:38.392889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.143 [2024-12-06 13:56:38.392957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.143 [2024-12-06 13:56:38.392979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.143 [2024-12-06 13:56:38.397600] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.143 [2024-12-06 13:56:38.397828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.143 [2024-12-06 13:56:38.397873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.143 [2024-12-06 13:56:38.402522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.143 [2024-12-06 13:56:38.402604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.143 [2024-12-06 13:56:38.402624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.143 [2024-12-06 13:56:38.407232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.143 [2024-12-06 13:56:38.407395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.143 [2024-12-06 13:56:38.407417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.143 [2024-12-06 13:56:38.411881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.143 [2024-12-06 13:56:38.411969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.143 [2024-12-06 13:56:38.411989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.143 [2024-12-06 13:56:38.416589] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.143 [2024-12-06 13:56:38.416673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.143 [2024-12-06 13:56:38.416695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.143 [2024-12-06 13:56:38.421060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.143 [2024-12-06 13:56:38.421213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.143 [2024-12-06 13:56:38.421251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.143 [2024-12-06 13:56:38.425753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.143 [2024-12-06 13:56:38.425975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.143 [2024-12-06 13:56:38.426014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.143 [2024-12-06 13:56:38.430651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.143 [2024-12-06 13:56:38.430714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.143 [2024-12-06 13:56:38.430735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.143 [2024-12-06 13:56:38.435371] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.143 [2024-12-06 13:56:38.435464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.143 [2024-12-06 13:56:38.435486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.143 [2024-12-06 13:56:38.440007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.143 [2024-12-06 13:56:38.440089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.143 [2024-12-06 13:56:38.440110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.143 [2024-12-06 13:56:38.444776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.143 [2024-12-06 13:56:38.444866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.143 [2024-12-06 13:56:38.444903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.143 
[2024-12-06 13:56:38.449912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.143 [2024-12-06 13:56:38.450132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.143 [2024-12-06 13:56:38.450154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.143 [2024-12-06 13:56:38.454557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.143 [2024-12-06 13:56:38.454637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.143 [2024-12-06 13:56:38.454658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.143 [2024-12-06 13:56:38.459129] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.143 [2024-12-06 13:56:38.459226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.143 [2024-12-06 13:56:38.459247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.143 [2024-12-06 13:56:38.463922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.143 [2024-12-06 13:56:38.463985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.143 [2024-12-06 13:56:38.464005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.143 [2024-12-06 13:56:38.468592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.143 [2024-12-06 13:56:38.468715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.143 [2024-12-06 13:56:38.468737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.143 [2024-12-06 13:56:38.472897] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.143 [2024-12-06 13:56:38.473000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.143 [2024-12-06 13:56:38.473020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.143 [2024-12-06 13:56:38.477233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.143 [2024-12-06 13:56:38.477322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.143 [2024-12-06 13:56:38.477342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:17:39.143 [2024-12-06 13:56:38.481566] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.143 [2024-12-06 13:56:38.481652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.143 [2024-12-06 13:56:38.481679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.143 [2024-12-06 13:56:38.486384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.143 [2024-12-06 13:56:38.486451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.143 [2024-12-06 13:56:38.486471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.143 [2024-12-06 13:56:38.491465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.143 [2024-12-06 13:56:38.491553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.143 [2024-12-06 13:56:38.491575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.143 [2024-12-06 13:56:38.496492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.143 [2024-12-06 13:56:38.496598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.143 [2024-12-06 13:56:38.496620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.143 [2024-12-06 13:56:38.501404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.143 [2024-12-06 13:56:38.501470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.143 [2024-12-06 13:56:38.501492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.143 [2024-12-06 13:56:38.506477] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.143 [2024-12-06 13:56:38.506610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.143 [2024-12-06 13:56:38.506632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.143 [2024-12-06 13:56:38.511273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.143 [2024-12-06 13:56:38.511408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.143 [2024-12-06 13:56:38.511430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.143 [2024-12-06 13:56:38.515746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.143 [2024-12-06 13:56:38.515829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.143 [2024-12-06 13:56:38.515850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.143 [2024-12-06 13:56:38.519991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.143 [2024-12-06 13:56:38.520096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.143 [2024-12-06 13:56:38.520146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.143 [2024-12-06 13:56:38.524291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.143 [2024-12-06 13:56:38.524377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.143 [2024-12-06 13:56:38.524396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.143 [2024-12-06 13:56:38.528879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.143 [2024-12-06 13:56:38.528983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.143 [2024-12-06 13:56:38.529005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.143 [2024-12-06 13:56:38.533967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.143 [2024-12-06 13:56:38.534067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.143 [2024-12-06 13:56:38.534109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.143 [2024-12-06 13:56:38.538866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.143 [2024-12-06 13:56:38.538982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.143 [2024-12-06 13:56:38.539003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.402 [2024-12-06 13:56:38.544108] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.402 [2024-12-06 13:56:38.544422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.402 [2024-12-06 13:56:38.544446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.402 [2024-12-06 13:56:38.549607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.402 [2024-12-06 13:56:38.549701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.402 [2024-12-06 13:56:38.549724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.402 [2024-12-06 13:56:38.555137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.403 [2024-12-06 13:56:38.555319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.403 [2024-12-06 13:56:38.555354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.403 [2024-12-06 13:56:38.560263] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.403 [2024-12-06 13:56:38.560350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.403 [2024-12-06 13:56:38.560373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.403 [2024-12-06 13:56:38.565346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.403 [2024-12-06 13:56:38.565428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.403 [2024-12-06 13:56:38.565465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.403 [2024-12-06 13:56:38.570682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.403 [2024-12-06 13:56:38.570775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.403 [2024-12-06 13:56:38.570799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.403 [2024-12-06 13:56:38.576228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.403 [2024-12-06 13:56:38.576317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.403 [2024-12-06 13:56:38.576340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.403 [2024-12-06 13:56:38.581700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.403 [2024-12-06 13:56:38.581788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.403 [2024-12-06 13:56:38.581812] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.403 [2024-12-06 13:56:38.586830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.403 [2024-12-06 13:56:38.586933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.403 [2024-12-06 13:56:38.586969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.403 [2024-12-06 13:56:38.591820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.403 [2024-12-06 13:56:38.592052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.403 [2024-12-06 13:56:38.592091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.403 [2024-12-06 13:56:38.597195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.403 [2024-12-06 13:56:38.597308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.403 [2024-12-06 13:56:38.597329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.403 [2024-12-06 13:56:38.601988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.403 [2024-12-06 13:56:38.602127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.403 [2024-12-06 13:56:38.602166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.403 [2024-12-06 13:56:38.606918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.403 [2024-12-06 13:56:38.607001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.403 [2024-12-06 13:56:38.607022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.403 [2024-12-06 13:56:38.611527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.403 [2024-12-06 13:56:38.611728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.403 [2024-12-06 13:56:38.611749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.403 [2024-12-06 13:56:38.616350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.403 [2024-12-06 13:56:38.616578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.403 [2024-12-06 
13:56:38.616860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.403 [2024-12-06 13:56:38.620906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.403 [2024-12-06 13:56:38.621137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.403 [2024-12-06 13:56:38.621291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.403 [2024-12-06 13:56:38.625544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.403 [2024-12-06 13:56:38.625792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.403 [2024-12-06 13:56:38.625947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.403 [2024-12-06 13:56:38.630708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.403 [2024-12-06 13:56:38.630947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.403 [2024-12-06 13:56:38.631119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.403 [2024-12-06 13:56:38.636257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.403 [2024-12-06 13:56:38.636533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.403 [2024-12-06 13:56:38.636853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.403 [2024-12-06 13:56:38.641681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.403 [2024-12-06 13:56:38.641926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.403 [2024-12-06 13:56:38.642219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.403 [2024-12-06 13:56:38.647140] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.403 [2024-12-06 13:56:38.647395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.403 [2024-12-06 13:56:38.647420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.403 [2024-12-06 13:56:38.652210] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.403 [2024-12-06 13:56:38.652329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:39.403 [2024-12-06 13:56:38.652351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.403 [2024-12-06 13:56:38.657130] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.403 [2024-12-06 13:56:38.657226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.403 [2024-12-06 13:56:38.657249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.403 [2024-12-06 13:56:38.662592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.403 [2024-12-06 13:56:38.662814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.403 [2024-12-06 13:56:38.662837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.403 [2024-12-06 13:56:38.668115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.403 [2024-12-06 13:56:38.668209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.403 [2024-12-06 13:56:38.668231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.403 [2024-12-06 13:56:38.672920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.403 [2024-12-06 13:56:38.673010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.404 [2024-12-06 13:56:38.673032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.404 [2024-12-06 13:56:38.677413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.404 [2024-12-06 13:56:38.677505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.404 [2024-12-06 13:56:38.677526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.404 [2024-12-06 13:56:38.682113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.404 [2024-12-06 13:56:38.682341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.404 [2024-12-06 13:56:38.682363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.404 [2024-12-06 13:56:38.687147] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.404 [2024-12-06 13:56:38.687440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18688 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.404 [2024-12-06 13:56:38.687724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.404 [2024-12-06 13:56:38.691928] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.404 [2024-12-06 13:56:38.692151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.404 [2024-12-06 13:56:38.692302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.404 [2024-12-06 13:56:38.696758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.404 [2024-12-06 13:56:38.697007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.404 [2024-12-06 13:56:38.697338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.404 [2024-12-06 13:56:38.701621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.404 [2024-12-06 13:56:38.701850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.404 [2024-12-06 13:56:38.702004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.404 [2024-12-06 13:56:38.706532] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.404 [2024-12-06 13:56:38.706772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.404 [2024-12-06 13:56:38.707052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.404 [2024-12-06 13:56:38.711537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.404 [2024-12-06 13:56:38.711763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.404 [2024-12-06 13:56:38.711911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.404 [2024-12-06 13:56:38.716358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.404 [2024-12-06 13:56:38.716610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.404 [2024-12-06 13:56:38.716835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.404 [2024-12-06 13:56:38.721338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.404 [2024-12-06 13:56:38.721602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 
nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.404 [2024-12-06 13:56:38.721626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.404 [2024-12-06 13:56:38.726261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.404 [2024-12-06 13:56:38.726369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.404 [2024-12-06 13:56:38.726391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.404 [2024-12-06 13:56:38.730914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.404 [2024-12-06 13:56:38.730995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.404 [2024-12-06 13:56:38.731033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.404 [2024-12-06 13:56:38.735800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.404 [2024-12-06 13:56:38.735883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.404 [2024-12-06 13:56:38.735904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.404 [2024-12-06 13:56:38.740565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.404 [2024-12-06 13:56:38.740649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.404 [2024-12-06 13:56:38.740670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.404 [2024-12-06 13:56:38.745219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.404 [2024-12-06 13:56:38.745288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.404 [2024-12-06 13:56:38.745310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.404 [2024-12-06 13:56:38.749922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.404 [2024-12-06 13:56:38.750005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.404 [2024-12-06 13:56:38.750026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.404 [2024-12-06 13:56:38.754755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.404 [2024-12-06 13:56:38.754844] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.404 [2024-12-06 13:56:38.754866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.404 [2024-12-06 13:56:38.759509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.404 [2024-12-06 13:56:38.759575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.404 [2024-12-06 13:56:38.759595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.404 [2024-12-06 13:56:38.764238] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.404 [2024-12-06 13:56:38.764305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.404 [2024-12-06 13:56:38.764327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.404 [2024-12-06 13:56:38.769001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.404 [2024-12-06 13:56:38.769067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.404 [2024-12-06 13:56:38.769088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.404 [2024-12-06 13:56:38.773923] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.404 [2024-12-06 13:56:38.774029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.404 [2024-12-06 13:56:38.774051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.404 [2024-12-06 13:56:38.778929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.404 [2024-12-06 13:56:38.779014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.404 [2024-12-06 13:56:38.779036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.404 [2024-12-06 13:56:38.783813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.404 [2024-12-06 13:56:38.784053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.404 [2024-12-06 13:56:38.784074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.404 [2024-12-06 13:56:38.788712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.404 [2024-12-06 13:56:38.788793] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.404 [2024-12-06 13:56:38.788812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.404 [2024-12-06 13:56:38.793232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.404 [2024-12-06 13:56:38.793343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.404 [2024-12-06 13:56:38.793363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.404 [2024-12-06 13:56:38.797550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.404 [2024-12-06 13:56:38.797654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.404 [2024-12-06 13:56:38.797674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.404 [2024-12-06 13:56:38.801899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.404 [2024-12-06 13:56:38.802018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.404 [2024-12-06 13:56:38.802038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.664 [2024-12-06 13:56:38.806396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.664 [2024-12-06 13:56:38.806466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.664 [2024-12-06 13:56:38.806486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.664 [2024-12-06 13:56:38.810921] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.664 [2024-12-06 13:56:38.810997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.664 [2024-12-06 13:56:38.811018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.664 [2024-12-06 13:56:38.815416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.664 [2024-12-06 13:56:38.815491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.664 [2024-12-06 13:56:38.815511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.664 [2024-12-06 13:56:38.819676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.664 [2024-12-06 
13:56:38.819887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.664 [2024-12-06 13:56:38.819907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.664 [2024-12-06 13:56:38.824336] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.664 [2024-12-06 13:56:38.824454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.664 [2024-12-06 13:56:38.824475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.664 [2024-12-06 13:56:38.828800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.664 [2024-12-06 13:56:38.828923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.664 [2024-12-06 13:56:38.828950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.664 [2024-12-06 13:56:38.833334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.664 [2024-12-06 13:56:38.833409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.664 [2024-12-06 13:56:38.833429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.664 [2024-12-06 13:56:38.837799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.664 [2024-12-06 13:56:38.837862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.664 [2024-12-06 13:56:38.837882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.664 [2024-12-06 13:56:38.842388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.664 [2024-12-06 13:56:38.842473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.664 [2024-12-06 13:56:38.842493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.664 [2024-12-06 13:56:38.846780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.664 [2024-12-06 13:56:38.846867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.664 [2024-12-06 13:56:38.846888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.664 [2024-12-06 13:56:38.851361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 
00:17:39.664 [2024-12-06 13:56:38.851443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.664 [2024-12-06 13:56:38.851463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.664 [2024-12-06 13:56:38.855814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.664 [2024-12-06 13:56:38.856026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.664 [2024-12-06 13:56:38.856047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.664 [2024-12-06 13:56:38.860519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.664 [2024-12-06 13:56:38.860634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.664 [2024-12-06 13:56:38.860671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.664 [2024-12-06 13:56:38.864934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.664 [2024-12-06 13:56:38.865051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.664 [2024-12-06 13:56:38.865072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.664 [2024-12-06 13:56:38.869547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.664 [2024-12-06 13:56:38.869632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.664 [2024-12-06 13:56:38.869652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.664 [2024-12-06 13:56:38.874055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.664 [2024-12-06 13:56:38.874166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.664 [2024-12-06 13:56:38.874187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.664 [2024-12-06 13:56:38.878629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.664 [2024-12-06 13:56:38.878729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.664 [2024-12-06 13:56:38.878759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.664 [2024-12-06 13:56:38.883027] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.665 [2024-12-06 13:56:38.883155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.665 [2024-12-06 13:56:38.883177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.665 [2024-12-06 13:56:38.887528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.665 [2024-12-06 13:56:38.887595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.665 [2024-12-06 13:56:38.887616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.665 [2024-12-06 13:56:38.892027] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.665 [2024-12-06 13:56:38.892144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.665 [2024-12-06 13:56:38.892165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.665 [2024-12-06 13:56:38.896777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.665 [2024-12-06 13:56:38.896884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.665 [2024-12-06 13:56:38.896923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.665 [2024-12-06 13:56:38.901629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.665 [2024-12-06 13:56:38.901748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.665 [2024-12-06 13:56:38.901769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.665 [2024-12-06 13:56:38.906455] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.665 [2024-12-06 13:56:38.906568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.665 [2024-12-06 13:56:38.906589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.665 [2024-12-06 13:56:38.911416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.665 [2024-12-06 13:56:38.911510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.665 [2024-12-06 13:56:38.911557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.665 6445.00 IOPS, 805.62 MiB/s [2024-12-06T13:56:39.069Z] [2024-12-06 
13:56:38.917966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.665 [2024-12-06 13:56:38.918123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.665 [2024-12-06 13:56:38.918146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.665 [2024-12-06 13:56:38.922880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.665 [2024-12-06 13:56:38.923133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.665 [2024-12-06 13:56:38.923157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.665 [2024-12-06 13:56:38.928179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.665 [2024-12-06 13:56:38.928305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.665 [2024-12-06 13:56:38.928327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.665 [2024-12-06 13:56:38.932831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.665 [2024-12-06 13:56:38.932917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.665 [2024-12-06 13:56:38.932937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.665 [2024-12-06 13:56:38.937793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.665 [2024-12-06 13:56:38.937942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.665 [2024-12-06 13:56:38.937963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.665 [2024-12-06 13:56:38.942675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.665 [2024-12-06 13:56:38.942884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.665 [2024-12-06 13:56:38.942905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.665 [2024-12-06 13:56:38.947575] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.665 [2024-12-06 13:56:38.947659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.665 [2024-12-06 13:56:38.947679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:17:39.665 [2024-12-06 13:56:38.952000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.665 [2024-12-06 13:56:38.952104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.665 [2024-12-06 13:56:38.952141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.665 [2024-12-06 13:56:38.956434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.665 [2024-12-06 13:56:38.956508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.665 [2024-12-06 13:56:38.956528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.665 [2024-12-06 13:56:38.960922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.665 [2024-12-06 13:56:38.961034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.665 [2024-12-06 13:56:38.961055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.665 [2024-12-06 13:56:38.965610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.665 [2024-12-06 13:56:38.965713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.665 [2024-12-06 13:56:38.965734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.665 [2024-12-06 13:56:38.970089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.665 [2024-12-06 13:56:38.970294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.665 [2024-12-06 13:56:38.970318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.665 [2024-12-06 13:56:38.974609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.665 [2024-12-06 13:56:38.974675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.665 [2024-12-06 13:56:38.974696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.665 [2024-12-06 13:56:38.979112] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.665 [2024-12-06 13:56:38.979371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.665 [2024-12-06 13:56:38.979396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 
cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.665 [2024-12-06 13:56:38.983876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.665 [2024-12-06 13:56:38.983958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.665 [2024-12-06 13:56:38.983994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.665 [2024-12-06 13:56:38.988514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.665 [2024-12-06 13:56:38.988597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.665 [2024-12-06 13:56:38.988617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.665 [2024-12-06 13:56:38.992924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.665 [2024-12-06 13:56:38.993034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.665 [2024-12-06 13:56:38.993055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.665 [2024-12-06 13:56:38.997560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.665 [2024-12-06 13:56:38.997647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.665 [2024-12-06 13:56:38.997667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.665 [2024-12-06 13:56:39.002193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.665 [2024-12-06 13:56:39.002317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.665 [2024-12-06 13:56:39.002338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.665 [2024-12-06 13:56:39.006685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.665 [2024-12-06 13:56:39.006924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.665 [2024-12-06 13:56:39.006945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.665 [2024-12-06 13:56:39.011492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.666 [2024-12-06 13:56:39.011586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.666 [2024-12-06 13:56:39.011606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.666 [2024-12-06 13:56:39.015970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.666 [2024-12-06 13:56:39.016069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.666 [2024-12-06 13:56:39.016107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.666 [2024-12-06 13:56:39.020552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.666 [2024-12-06 13:56:39.020635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.666 [2024-12-06 13:56:39.020656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.666 [2024-12-06 13:56:39.025017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.666 [2024-12-06 13:56:39.025149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.666 [2024-12-06 13:56:39.025185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.666 [2024-12-06 13:56:39.029573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.666 [2024-12-06 13:56:39.029673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.666 [2024-12-06 13:56:39.029694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.666 [2024-12-06 13:56:39.033945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.666 [2024-12-06 13:56:39.034055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.666 [2024-12-06 13:56:39.034076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.666 [2024-12-06 13:56:39.038483] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.666 [2024-12-06 13:56:39.038709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.666 [2024-12-06 13:56:39.038730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.666 [2024-12-06 13:56:39.043092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.666 [2024-12-06 13:56:39.043263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.666 [2024-12-06 13:56:39.043285] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.666 [2024-12-06 13:56:39.047789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.666 [2024-12-06 13:56:39.047883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.666 [2024-12-06 13:56:39.047919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.666 [2024-12-06 13:56:39.052396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.666 [2024-12-06 13:56:39.052478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.666 [2024-12-06 13:56:39.052498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.666 [2024-12-06 13:56:39.056769] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.666 [2024-12-06 13:56:39.056925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.666 [2024-12-06 13:56:39.056946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.666 [2024-12-06 13:56:39.061268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.666 [2024-12-06 13:56:39.061400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.666 [2024-12-06 13:56:39.061423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.925 [2024-12-06 13:56:39.065748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.925 [2024-12-06 13:56:39.065985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.925 [2024-12-06 13:56:39.066062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.925 [2024-12-06 13:56:39.070628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.925 [2024-12-06 13:56:39.070739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.925 [2024-12-06 13:56:39.070761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.925 [2024-12-06 13:56:39.075046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.925 [2024-12-06 13:56:39.075158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.925 [2024-12-06 13:56:39.075180] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.925 [2024-12-06 13:56:39.079570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.925 [2024-12-06 13:56:39.079672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.925 [2024-12-06 13:56:39.079692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.925 [2024-12-06 13:56:39.084098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.926 [2024-12-06 13:56:39.084226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.926 [2024-12-06 13:56:39.084264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.926 [2024-12-06 13:56:39.088879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.926 [2024-12-06 13:56:39.088978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.926 [2024-12-06 13:56:39.088997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.926 [2024-12-06 13:56:39.093634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.926 [2024-12-06 13:56:39.093883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.926 [2024-12-06 13:56:39.093906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.926 [2024-12-06 13:56:39.098777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.926 [2024-12-06 13:56:39.098857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.926 [2024-12-06 13:56:39.098879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.926 [2024-12-06 13:56:39.104052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.926 [2024-12-06 13:56:39.104174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.926 [2024-12-06 13:56:39.104196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.926 [2024-12-06 13:56:39.109261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.926 [2024-12-06 13:56:39.109327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.926 [2024-12-06 
13:56:39.109346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.926 [2024-12-06 13:56:39.114229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.926 [2024-12-06 13:56:39.114336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.926 [2024-12-06 13:56:39.114357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.926 [2024-12-06 13:56:39.119204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.926 [2024-12-06 13:56:39.119382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.926 [2024-12-06 13:56:39.119421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.926 [2024-12-06 13:56:39.124151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.926 [2024-12-06 13:56:39.124279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.926 [2024-12-06 13:56:39.124317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.926 [2024-12-06 13:56:39.129182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.926 [2024-12-06 13:56:39.129282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.926 [2024-12-06 13:56:39.129302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.926 [2024-12-06 13:56:39.134105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.926 [2024-12-06 13:56:39.134209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.926 [2024-12-06 13:56:39.134230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.926 [2024-12-06 13:56:39.138951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.926 [2024-12-06 13:56:39.139067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.926 [2024-12-06 13:56:39.139088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.926 [2024-12-06 13:56:39.143573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.926 [2024-12-06 13:56:39.143815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:39.926 [2024-12-06 13:56:39.143835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.926 [2024-12-06 13:56:39.148441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.926 [2024-12-06 13:56:39.148529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.926 [2024-12-06 13:56:39.148549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.926 [2024-12-06 13:56:39.152881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.926 [2024-12-06 13:56:39.152985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.926 [2024-12-06 13:56:39.153005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.926 [2024-12-06 13:56:39.157517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.926 [2024-12-06 13:56:39.157601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.926 [2024-12-06 13:56:39.157621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.926 [2024-12-06 13:56:39.161804] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.926 [2024-12-06 13:56:39.161908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.926 [2024-12-06 13:56:39.161929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.926 [2024-12-06 13:56:39.166316] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.926 [2024-12-06 13:56:39.166380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.926 [2024-12-06 13:56:39.166400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.926 [2024-12-06 13:56:39.170883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.926 [2024-12-06 13:56:39.170996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.926 [2024-12-06 13:56:39.171016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.926 [2024-12-06 13:56:39.175572] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.926 [2024-12-06 13:56:39.175637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.926 [2024-12-06 13:56:39.175657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.926 [2024-12-06 13:56:39.179976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.926 [2024-12-06 13:56:39.180087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.926 [2024-12-06 13:56:39.180155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.926 [2024-12-06 13:56:39.184640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.926 [2024-12-06 13:56:39.184723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.926 [2024-12-06 13:56:39.184743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.926 [2024-12-06 13:56:39.189091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.926 [2024-12-06 13:56:39.189216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.926 [2024-12-06 13:56:39.189238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.926 [2024-12-06 13:56:39.193549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.926 [2024-12-06 13:56:39.193632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.926 [2024-12-06 13:56:39.193652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.926 [2024-12-06 13:56:39.198076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.926 [2024-12-06 13:56:39.198352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.926 [2024-12-06 13:56:39.198374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.926 [2024-12-06 13:56:39.202876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.926 [2024-12-06 13:56:39.202957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.926 [2024-12-06 13:56:39.202977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.926 [2024-12-06 13:56:39.207467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.926 [2024-12-06 13:56:39.207531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.926 [2024-12-06 13:56:39.207552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.926 [2024-12-06 13:56:39.211994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.926 [2024-12-06 13:56:39.212098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.926 [2024-12-06 13:56:39.212135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.926 [2024-12-06 13:56:39.216554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.926 [2024-12-06 13:56:39.216672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.926 [2024-12-06 13:56:39.216692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.926 [2024-12-06 13:56:39.220951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.926 [2024-12-06 13:56:39.221070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.926 [2024-12-06 13:56:39.221091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.926 [2024-12-06 13:56:39.225573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.926 [2024-12-06 13:56:39.225689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.926 [2024-12-06 13:56:39.225709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.926 [2024-12-06 13:56:39.230043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.926 [2024-12-06 13:56:39.230300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.926 [2024-12-06 13:56:39.230322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.926 [2024-12-06 13:56:39.234799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.926 [2024-12-06 13:56:39.234917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.926 [2024-12-06 13:56:39.234938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.926 [2024-12-06 13:56:39.239429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.926 [2024-12-06 13:56:39.239522] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.926 [2024-12-06 13:56:39.239542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.926 [2024-12-06 13:56:39.243884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.927 [2024-12-06 13:56:39.243995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.927 [2024-12-06 13:56:39.244015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.927 [2024-12-06 13:56:39.248407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.927 [2024-12-06 13:56:39.248471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.927 [2024-12-06 13:56:39.248490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.927 [2024-12-06 13:56:39.252883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.927 [2024-12-06 13:56:39.253001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.927 [2024-12-06 13:56:39.253023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.927 [2024-12-06 13:56:39.257429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.927 [2024-12-06 13:56:39.257492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.927 [2024-12-06 13:56:39.257516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.927 [2024-12-06 13:56:39.261828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.927 [2024-12-06 13:56:39.262079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.927 [2024-12-06 13:56:39.262100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.927 [2024-12-06 13:56:39.266661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.927 [2024-12-06 13:56:39.266764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.927 [2024-12-06 13:56:39.266784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.927 [2024-12-06 13:56:39.270824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.927 [2024-12-06 13:56:39.270933] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.927 [2024-12-06 13:56:39.270953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.927 [2024-12-06 13:56:39.275159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.927 [2024-12-06 13:56:39.275231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.927 [2024-12-06 13:56:39.275252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.927 [2024-12-06 13:56:39.279697] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.927 [2024-12-06 13:56:39.279772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.927 [2024-12-06 13:56:39.279792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.927 [2024-12-06 13:56:39.284232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.927 [2024-12-06 13:56:39.284345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.927 [2024-12-06 13:56:39.284366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.927 [2024-12-06 13:56:39.288623] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.927 [2024-12-06 13:56:39.288686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.927 [2024-12-06 13:56:39.288706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.927 [2024-12-06 13:56:39.293139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.927 [2024-12-06 13:56:39.293260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.927 [2024-12-06 13:56:39.293280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.927 [2024-12-06 13:56:39.297611] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.927 [2024-12-06 13:56:39.297818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.927 [2024-12-06 13:56:39.297839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.927 [2024-12-06 13:56:39.302456] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.927 [2024-12-06 
13:56:39.302722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.927 [2024-12-06 13:56:39.302743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.927 [2024-12-06 13:56:39.307149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.927 [2024-12-06 13:56:39.307231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.927 [2024-12-06 13:56:39.307251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:39.927 [2024-12-06 13:56:39.311601] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.927 [2024-12-06 13:56:39.311665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.927 [2024-12-06 13:56:39.311684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.927 [2024-12-06 13:56:39.316027] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.927 [2024-12-06 13:56:39.316105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.927 [2024-12-06 13:56:39.316142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:39.927 [2024-12-06 13:56:39.320578] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.927 [2024-12-06 13:56:39.320681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.927 [2024-12-06 13:56:39.320702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.927 [2024-12-06 13:56:39.325015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:39.927 [2024-12-06 13:56:39.325147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.927 [2024-12-06 13:56:39.325184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:40.187 [2024-12-06 13:56:39.329610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.187 [2024-12-06 13:56:39.329833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.187 [2024-12-06 13:56:39.329854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:40.187 [2024-12-06 13:56:39.334325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with 
pdu=0x200016eff3c8 00:17:40.187 [2024-12-06 13:56:39.334408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.187 [2024-12-06 13:56:39.334428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:40.187 [2024-12-06 13:56:39.338780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.187 [2024-12-06 13:56:39.338884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.187 [2024-12-06 13:56:39.338905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:40.187 [2024-12-06 13:56:39.343401] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.187 [2024-12-06 13:56:39.343503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.187 [2024-12-06 13:56:39.343525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:40.187 [2024-12-06 13:56:39.347791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.187 [2024-12-06 13:56:39.347856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.187 [2024-12-06 13:56:39.347878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:40.187 [2024-12-06 13:56:39.352621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.187 [2024-12-06 13:56:39.352705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.187 [2024-12-06 13:56:39.352727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:40.187 [2024-12-06 13:56:39.357602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.187 [2024-12-06 13:56:39.357726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.187 [2024-12-06 13:56:39.357747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:40.187 [2024-12-06 13:56:39.362532] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.187 [2024-12-06 13:56:39.362596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.187 [2024-12-06 13:56:39.362617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:40.187 [2024-12-06 13:56:39.367822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.187 [2024-12-06 13:56:39.367909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.187 [2024-12-06 13:56:39.367931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:40.187 [2024-12-06 13:56:39.372828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.187 [2024-12-06 13:56:39.373068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.187 [2024-12-06 13:56:39.373089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:40.187 [2024-12-06 13:56:39.378325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.187 [2024-12-06 13:56:39.378396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.187 [2024-12-06 13:56:39.378418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:40.187 [2024-12-06 13:56:39.383285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.187 [2024-12-06 13:56:39.383392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.187 [2024-12-06 13:56:39.383414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:40.187 [2024-12-06 13:56:39.388097] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.187 [2024-12-06 13:56:39.388200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.187 [2024-12-06 13:56:39.388222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:40.187 [2024-12-06 13:56:39.392896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.187 [2024-12-06 13:56:39.393083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.187 [2024-12-06 13:56:39.393105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:40.187 [2024-12-06 13:56:39.398097] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.187 [2024-12-06 13:56:39.398398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.187 [2024-12-06 13:56:39.398742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:40.187 [2024-12-06 13:56:39.403281] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.187 [2024-12-06 13:56:39.403541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.187 [2024-12-06 13:56:39.403794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:40.188 [2024-12-06 13:56:39.408147] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.188 [2024-12-06 13:56:39.408400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.188 [2024-12-06 13:56:39.408602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:40.188 [2024-12-06 13:56:39.412838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.188 [2024-12-06 13:56:39.413073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.188 [2024-12-06 13:56:39.413363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:40.188 [2024-12-06 13:56:39.417848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.188 [2024-12-06 13:56:39.418062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.188 [2024-12-06 13:56:39.418374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:40.188 [2024-12-06 13:56:39.422786] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.188 [2024-12-06 13:56:39.422997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.188 [2024-12-06 13:56:39.423345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:40.188 [2024-12-06 13:56:39.427732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.188 [2024-12-06 13:56:39.427958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.188 [2024-12-06 13:56:39.428109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:40.188 [2024-12-06 13:56:39.432796] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.188 [2024-12-06 13:56:39.433068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.188 [2024-12-06 13:56:39.433465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:40.188 [2024-12-06 13:56:39.437821] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.188 [2024-12-06 13:56:39.438044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.188 [2024-12-06 13:56:39.438202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:40.188 [2024-12-06 13:56:39.442857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.188 [2024-12-06 13:56:39.443123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.188 [2024-12-06 13:56:39.443151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:40.188 [2024-12-06 13:56:39.447974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.188 [2024-12-06 13:56:39.448100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.188 [2024-12-06 13:56:39.448122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:40.188 [2024-12-06 13:56:39.452933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.188 [2024-12-06 13:56:39.453188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.188 [2024-12-06 13:56:39.453225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:40.188 [2024-12-06 13:56:39.458166] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.188 [2024-12-06 13:56:39.458396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.188 [2024-12-06 13:56:39.458596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:40.188 [2024-12-06 13:56:39.463315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.188 [2024-12-06 13:56:39.463644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.188 [2024-12-06 13:56:39.463896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:40.188 [2024-12-06 13:56:39.468667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.188 [2024-12-06 13:56:39.468895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.188 [2024-12-06 13:56:39.469121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:40.188 
[2024-12-06 13:56:39.473673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.188 [2024-12-06 13:56:39.473935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.188 [2024-12-06 13:56:39.474124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:40.188 [2024-12-06 13:56:39.478635] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.188 [2024-12-06 13:56:39.478860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.188 [2024-12-06 13:56:39.479018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:40.188 [2024-12-06 13:56:39.483955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.188 [2024-12-06 13:56:39.484223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.188 [2024-12-06 13:56:39.484392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:40.188 [2024-12-06 13:56:39.489071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.188 [2024-12-06 13:56:39.489319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.188 [2024-12-06 13:56:39.489585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:40.188 [2024-12-06 13:56:39.493914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.188 [2024-12-06 13:56:39.494165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.188 [2024-12-06 13:56:39.494412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:40.188 [2024-12-06 13:56:39.498651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.188 [2024-12-06 13:56:39.498841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.188 [2024-12-06 13:56:39.498863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:40.188 [2024-12-06 13:56:39.503791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.188 [2024-12-06 13:56:39.503908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.188 [2024-12-06 13:56:39.503945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:17:40.188 [2024-12-06 13:56:39.508541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.188 [2024-12-06 13:56:39.508730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.188 [2024-12-06 13:56:39.508752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:40.188 [2024-12-06 13:56:39.513571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.188 [2024-12-06 13:56:39.513686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.188 [2024-12-06 13:56:39.513707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:40.188 [2024-12-06 13:56:39.518472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.188 [2024-12-06 13:56:39.518554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.188 [2024-12-06 13:56:39.518575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:40.188 [2024-12-06 13:56:39.523258] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.188 [2024-12-06 13:56:39.523375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.188 [2024-12-06 13:56:39.523396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:40.188 [2024-12-06 13:56:39.528084] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.188 [2024-12-06 13:56:39.528191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.188 [2024-12-06 13:56:39.528215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:40.188 [2024-12-06 13:56:39.532962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.188 [2024-12-06 13:56:39.533053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.188 [2024-12-06 13:56:39.533073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:40.188 [2024-12-06 13:56:39.537848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.188 [2024-12-06 13:56:39.537913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.188 [2024-12-06 13:56:39.537933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:40.189 [2024-12-06 13:56:39.542949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.189 [2024-12-06 13:56:39.543192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.189 [2024-12-06 13:56:39.543215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:40.189 [2024-12-06 13:56:39.548249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.189 [2024-12-06 13:56:39.548335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.189 [2024-12-06 13:56:39.548358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:40.189 [2024-12-06 13:56:39.553044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.189 [2024-12-06 13:56:39.553209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.189 [2024-12-06 13:56:39.553231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:40.189 [2024-12-06 13:56:39.557895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.189 [2024-12-06 13:56:39.558005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.189 [2024-12-06 13:56:39.558025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:40.189 [2024-12-06 13:56:39.562615] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.189 [2024-12-06 13:56:39.562772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.189 [2024-12-06 13:56:39.562794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:40.189 [2024-12-06 13:56:39.567372] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.189 [2024-12-06 13:56:39.567481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.189 [2024-12-06 13:56:39.567503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:40.189 [2024-12-06 13:56:39.571964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.189 [2024-12-06 13:56:39.572030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.189 [2024-12-06 13:56:39.572050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:40.189 [2024-12-06 13:56:39.576708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.189 [2024-12-06 13:56:39.576811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.189 [2024-12-06 13:56:39.576831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:40.189 [2024-12-06 13:56:39.581328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.189 [2024-12-06 13:56:39.581395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.189 [2024-12-06 13:56:39.581417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:40.189 [2024-12-06 13:56:39.586086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.189 [2024-12-06 13:56:39.586196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.189 [2024-12-06 13:56:39.586232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:40.450 [2024-12-06 13:56:39.590810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.450 [2024-12-06 13:56:39.591010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.450 [2024-12-06 13:56:39.591031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:40.450 [2024-12-06 13:56:39.595801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.450 [2024-12-06 13:56:39.595935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.450 [2024-12-06 13:56:39.595957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:40.450 [2024-12-06 13:56:39.600683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.450 [2024-12-06 13:56:39.600788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.450 [2024-12-06 13:56:39.600809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:40.450 [2024-12-06 13:56:39.605040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.450 [2024-12-06 13:56:39.605118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.450 [2024-12-06 13:56:39.605155] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:40.450 [2024-12-06 13:56:39.609500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.450 [2024-12-06 13:56:39.609583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.450 [2024-12-06 13:56:39.609603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:40.450 [2024-12-06 13:56:39.614334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.450 [2024-12-06 13:56:39.614410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.450 [2024-12-06 13:56:39.614430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:40.450 [2024-12-06 13:56:39.618850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.450 [2024-12-06 13:56:39.618932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.450 [2024-12-06 13:56:39.618952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:40.450 [2024-12-06 13:56:39.623792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.450 [2024-12-06 13:56:39.623880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.450 [2024-12-06 13:56:39.623900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:40.450 [2024-12-06 13:56:39.628552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.450 [2024-12-06 13:56:39.628620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.450 [2024-12-06 13:56:39.628642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:40.450 [2024-12-06 13:56:39.633269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.450 [2024-12-06 13:56:39.633349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.450 [2024-12-06 13:56:39.633369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:40.450 [2024-12-06 13:56:39.637841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.450 [2024-12-06 13:56:39.637923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.450 [2024-12-06 
13:56:39.637943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:40.450 [2024-12-06 13:56:39.642623] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.450 [2024-12-06 13:56:39.642771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.450 [2024-12-06 13:56:39.642794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:40.450 [2024-12-06 13:56:39.647268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.450 [2024-12-06 13:56:39.647408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.450 [2024-12-06 13:56:39.647429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:40.450 [2024-12-06 13:56:39.651833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.450 [2024-12-06 13:56:39.651965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.450 [2024-12-06 13:56:39.651985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:40.450 [2024-12-06 13:56:39.656687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.450 [2024-12-06 13:56:39.656888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.450 [2024-12-06 13:56:39.656909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:40.450 [2024-12-06 13:56:39.661656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.450 [2024-12-06 13:56:39.661767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.450 [2024-12-06 13:56:39.661787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:40.450 [2024-12-06 13:56:39.666370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.450 [2024-12-06 13:56:39.666458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.450 [2024-12-06 13:56:39.666478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:40.450 [2024-12-06 13:56:39.671033] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.450 [2024-12-06 13:56:39.671098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:40.450 [2024-12-06 13:56:39.671133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:40.450 [2024-12-06 13:56:39.675847] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.450 [2024-12-06 13:56:39.676056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.450 [2024-12-06 13:56:39.676078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:40.450 [2024-12-06 13:56:39.680784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.450 [2024-12-06 13:56:39.680894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.450 [2024-12-06 13:56:39.680915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:40.450 [2024-12-06 13:56:39.685579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.450 [2024-12-06 13:56:39.685678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.450 [2024-12-06 13:56:39.685701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:40.450 [2024-12-06 13:56:39.690389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.450 [2024-12-06 13:56:39.690453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.450 [2024-12-06 13:56:39.690474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:40.450 [2024-12-06 13:56:39.695105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.450 [2024-12-06 13:56:39.695224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.450 [2024-12-06 13:56:39.695245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:40.450 [2024-12-06 13:56:39.699801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.450 [2024-12-06 13:56:39.700016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.450 [2024-12-06 13:56:39.700037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:40.450 [2024-12-06 13:56:39.704701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.450 [2024-12-06 13:56:39.704818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13696 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.451 [2024-12-06 13:56:39.704840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:40.451 [2024-12-06 13:56:39.709340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.451 [2024-12-06 13:56:39.709404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.451 [2024-12-06 13:56:39.709424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:40.451 [2024-12-06 13:56:39.713673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.451 [2024-12-06 13:56:39.713797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.451 [2024-12-06 13:56:39.713818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:40.451 [2024-12-06 13:56:39.718166] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.451 [2024-12-06 13:56:39.718304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.451 [2024-12-06 13:56:39.718342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:40.451 [2024-12-06 13:56:39.722934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.451 [2024-12-06 13:56:39.723010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.451 [2024-12-06 13:56:39.723029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:40.451 [2024-12-06 13:56:39.727577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.451 [2024-12-06 13:56:39.727659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.451 [2024-12-06 13:56:39.727679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:40.451 [2024-12-06 13:56:39.732233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.451 [2024-12-06 13:56:39.732388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.451 [2024-12-06 13:56:39.732408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:40.451 [2024-12-06 13:56:39.736942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.451 [2024-12-06 13:56:39.737051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 
nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.451 [2024-12-06 13:56:39.737088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:40.451 [2024-12-06 13:56:39.741658] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.451 [2024-12-06 13:56:39.741765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.451 [2024-12-06 13:56:39.741784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:40.451 [2024-12-06 13:56:39.746297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.451 [2024-12-06 13:56:39.746480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.451 [2024-12-06 13:56:39.746551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:40.451 [2024-12-06 13:56:39.750828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.451 [2024-12-06 13:56:39.750977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.451 [2024-12-06 13:56:39.750998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:40.451 [2024-12-06 13:56:39.755617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.451 [2024-12-06 13:56:39.755839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.451 [2024-12-06 13:56:39.755893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:40.451 [2024-12-06 13:56:39.760611] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.451 [2024-12-06 13:56:39.760674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.451 [2024-12-06 13:56:39.760695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:40.451 [2024-12-06 13:56:39.765293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.451 [2024-12-06 13:56:39.765386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.451 [2024-12-06 13:56:39.765408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:40.451 [2024-12-06 13:56:39.769817] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.451 [2024-12-06 13:56:39.769921] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.451 [2024-12-06 13:56:39.769941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:40.451 [2024-12-06 13:56:39.774870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.451 [2024-12-06 13:56:39.775205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.451 [2024-12-06 13:56:39.775228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:40.451 [2024-12-06 13:56:39.780275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.451 [2024-12-06 13:56:39.780378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.451 [2024-12-06 13:56:39.780430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:40.451 [2024-12-06 13:56:39.785336] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.451 [2024-12-06 13:56:39.785450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.451 [2024-12-06 13:56:39.785473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:40.451 [2024-12-06 13:56:39.790350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.451 [2024-12-06 13:56:39.790459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.451 [2024-12-06 13:56:39.790480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:40.451 [2024-12-06 13:56:39.795394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.451 [2024-12-06 13:56:39.795472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.451 [2024-12-06 13:56:39.795494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:40.451 [2024-12-06 13:56:39.800454] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.451 [2024-12-06 13:56:39.800572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.451 [2024-12-06 13:56:39.800594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:40.451 [2024-12-06 13:56:39.805440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.451 [2024-12-06 13:56:39.805529] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.451 [2024-12-06 13:56:39.805567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:40.451 [2024-12-06 13:56:39.810260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.451 [2024-12-06 13:56:39.810346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.451 [2024-12-06 13:56:39.810368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:40.451 [2024-12-06 13:56:39.815212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.451 [2024-12-06 13:56:39.815310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.451 [2024-12-06 13:56:39.815357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:40.451 [2024-12-06 13:56:39.820107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.451 [2024-12-06 13:56:39.820279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.451 [2024-12-06 13:56:39.820318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:40.451 [2024-12-06 13:56:39.824660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.451 [2024-12-06 13:56:39.824735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.451 [2024-12-06 13:56:39.824756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:40.451 [2024-12-06 13:56:39.829205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.451 [2024-12-06 13:56:39.829304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.451 [2024-12-06 13:56:39.829325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:40.451 [2024-12-06 13:56:39.833899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.451 [2024-12-06 13:56:39.833963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.452 [2024-12-06 13:56:39.833984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:40.452 [2024-12-06 13:56:39.838711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.452 [2024-12-06 
13:56:39.838797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.452 [2024-12-06 13:56:39.838817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:40.452 [2024-12-06 13:56:39.843607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.452 [2024-12-06 13:56:39.843706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.452 [2024-12-06 13:56:39.843726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:40.452 [2024-12-06 13:56:39.848257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.452 [2024-12-06 13:56:39.848363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.452 [2024-12-06 13:56:39.848384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:40.711 [2024-12-06 13:56:39.852966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.711 [2024-12-06 13:56:39.853033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.711 [2024-12-06 13:56:39.853055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:40.711 [2024-12-06 13:56:39.857789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.711 [2024-12-06 13:56:39.857855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.711 [2024-12-06 13:56:39.857876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:40.711 [2024-12-06 13:56:39.862600] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.711 [2024-12-06 13:56:39.862664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.711 [2024-12-06 13:56:39.862685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:40.711 [2024-12-06 13:56:39.867178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.711 [2024-12-06 13:56:39.867284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.711 [2024-12-06 13:56:39.867305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:40.711 [2024-12-06 13:56:39.871833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 
00:17:40.711 [2024-12-06 13:56:39.871942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.711 [2024-12-06 13:56:39.871963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:40.711 [2024-12-06 13:56:39.876548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.711 [2024-12-06 13:56:39.876648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.711 [2024-12-06 13:56:39.876669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:40.711 [2024-12-06 13:56:39.881236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.711 [2024-12-06 13:56:39.881320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.711 [2024-12-06 13:56:39.881341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:40.711 [2024-12-06 13:56:39.885946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.711 [2024-12-06 13:56:39.886012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.711 [2024-12-06 13:56:39.886032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:40.711 [2024-12-06 13:56:39.890674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.711 [2024-12-06 13:56:39.890902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.712 [2024-12-06 13:56:39.890941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:40.712 [2024-12-06 13:56:39.895614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.712 [2024-12-06 13:56:39.895700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.712 [2024-12-06 13:56:39.895721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:40.712 [2024-12-06 13:56:39.900228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.712 [2024-12-06 13:56:39.900312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.712 [2024-12-06 13:56:39.900332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:40.712 [2024-12-06 13:56:39.904984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.712 [2024-12-06 13:56:39.905048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.712 [2024-12-06 13:56:39.905069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:40.712 [2024-12-06 13:56:39.909820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.712 [2024-12-06 13:56:39.909892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.712 [2024-12-06 13:56:39.909912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:40.712 [2024-12-06 13:56:39.914602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ea3910) with pdu=0x200016eff3c8 00:17:40.712 [2024-12-06 13:56:39.914827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.712 [2024-12-06 13:56:39.914849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:40.712 6481.50 IOPS, 810.19 MiB/s 00:17:40.712 Latency(us) 00:17:40.712 [2024-12-06T13:56:40.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.712 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:40.712 nvme0n1 : 2.00 6480.71 810.09 0.00 0.00 2463.41 1526.69 8877.15 00:17:40.712 [2024-12-06T13:56:40.116Z] =================================================================================================================== 00:17:40.712 [2024-12-06T13:56:40.116Z] Total : 6480.71 810.09 0.00 0.00 2463.41 1526.69 8877.15 00:17:40.712 { 00:17:40.712 "results": [ 00:17:40.712 { 00:17:40.712 "job": "nvme0n1", 00:17:40.712 "core_mask": "0x2", 00:17:40.712 "workload": "randwrite", 00:17:40.712 "status": "finished", 00:17:40.712 "queue_depth": 16, 00:17:40.712 "io_size": 131072, 00:17:40.712 "runtime": 2.0041, 00:17:40.712 "iops": 6480.714535202834, 00:17:40.712 "mibps": 810.0893169003542, 00:17:40.712 "io_failed": 0, 00:17:40.712 "io_timeout": 0, 00:17:40.712 "avg_latency_us": 2463.4133796231486, 00:17:40.712 "min_latency_us": 1526.6909090909091, 00:17:40.712 "max_latency_us": 8877.149090909092 00:17:40.712 } 00:17:40.712 ], 00:17:40.712 "core_count": 1 00:17:40.712 } 00:17:40.712 13:56:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:40.712 13:56:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:40.712 13:56:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:40.712 13:56:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:40.712 | .driver_specific 00:17:40.712 | .nvme_error 00:17:40.712 | .status_code 00:17:40.712 | .command_transient_transport_error' 00:17:40.971 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 419 > 0 )) 00:17:40.971 13:56:40 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80360 00:17:40.971 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80360 ']' 00:17:40.971 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80360 00:17:40.971 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:17:40.971 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:40.971 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80360 00:17:40.971 killing process with pid 80360 00:17:40.971 Received shutdown signal, test time was about 2.000000 seconds 00:17:40.971 00:17:40.971 Latency(us) 00:17:40.971 [2024-12-06T13:56:40.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.971 [2024-12-06T13:56:40.375Z] =================================================================================================================== 00:17:40.971 [2024-12-06T13:56:40.375Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:40.971 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:40.971 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:40.971 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80360' 00:17:40.971 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80360 00:17:40.971 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80360 00:17:41.230 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80155 00:17:41.230 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80155 ']' 00:17:41.231 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80155 00:17:41.231 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:17:41.231 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:41.231 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80155 00:17:41.231 killing process with pid 80155 00:17:41.231 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:41.231 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:41.231 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80155' 00:17:41.231 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80155 00:17:41.231 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80155 00:17:41.490 ************************************ 00:17:41.490 END TEST nvmf_digest_error 00:17:41.490 ************************************ 00:17:41.490 00:17:41.490 real 0m17.963s 00:17:41.490 user 0m34.692s 
00:17:41.490 sys 0m4.944s 00:17:41.490 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:41.490 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:41.490 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:41.490 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:17:41.490 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:41.490 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:17:41.490 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:41.490 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:17:41.490 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:41.490 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:41.490 rmmod nvme_tcp 00:17:41.490 rmmod nvme_fabrics 00:17:41.490 rmmod nvme_keyring 00:17:41.490 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:41.490 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:17:41.490 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:17:41.490 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 80155 ']' 00:17:41.490 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 80155 00:17:41.490 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 80155 ']' 00:17:41.490 Process with pid 80155 is not found 00:17:41.490 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 80155 00:17:41.490 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80155) - No such process 00:17:41.490 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 80155 is not found' 00:17:41.490 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:41.490 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:41.490 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:41.490 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:17:41.490 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:41.490 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:17:41.490 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:17:41.490 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:41.490 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:41.490 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:41.490 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:41.490 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:41.749 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:41.749 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip 
link set nvmf_init_br down 00:17:41.749 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:41.749 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:41.750 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:41.750 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:41.750 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:41.750 13:56:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:41.750 13:56:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:41.750 13:56:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:41.750 13:56:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:41.750 13:56:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.750 13:56:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:41.750 13:56:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:41.750 13:56:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:17:41.750 00:17:41.750 real 0m34.517s 00:17:41.750 user 1m4.834s 00:17:41.750 sys 0m10.032s 00:17:41.750 13:56:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:41.750 ************************************ 00:17:41.750 END TEST nvmf_digest 00:17:41.750 ************************************ 00:17:41.750 13:56:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:41.750 13:56:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:17:41.750 13:56:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:17:41.750 13:56:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:41.750 13:56:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:41.750 13:56:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:41.750 13:56:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.010 ************************************ 00:17:42.010 START TEST nvmf_host_multipath 00:17:42.010 ************************************ 00:17:42.010 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:42.010 * Looking for test storage... 
00:17:42.010 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:42.010 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:42.010 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:17:42.010 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:42.010 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:42.010 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:42.010 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:42.010 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:42.010 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:17:42.010 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:17:42.010 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:17:42.010 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:17:42.010 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:17:42.010 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:17:42.010 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:17:42.010 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:42.010 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:17:42.010 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:17:42.010 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:42.010 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:42.010 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:17:42.010 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:17:42.010 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:42.010 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:17:42.010 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:17:42.010 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:17:42.010 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:17:42.010 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:42.010 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:17:42.010 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:17:42.010 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:42.010 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:42.010 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:17:42.010 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:42.010 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:42.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:42.010 --rc genhtml_branch_coverage=1 00:17:42.010 --rc genhtml_function_coverage=1 00:17:42.010 --rc genhtml_legend=1 00:17:42.010 --rc geninfo_all_blocks=1 00:17:42.011 --rc geninfo_unexecuted_blocks=1 00:17:42.011 00:17:42.011 ' 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:42.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:42.011 --rc genhtml_branch_coverage=1 00:17:42.011 --rc genhtml_function_coverage=1 00:17:42.011 --rc genhtml_legend=1 00:17:42.011 --rc geninfo_all_blocks=1 00:17:42.011 --rc geninfo_unexecuted_blocks=1 00:17:42.011 00:17:42.011 ' 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:42.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:42.011 --rc genhtml_branch_coverage=1 00:17:42.011 --rc genhtml_function_coverage=1 00:17:42.011 --rc genhtml_legend=1 00:17:42.011 --rc geninfo_all_blocks=1 00:17:42.011 --rc geninfo_unexecuted_blocks=1 00:17:42.011 00:17:42.011 ' 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:42.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:42.011 --rc genhtml_branch_coverage=1 00:17:42.011 --rc genhtml_function_coverage=1 00:17:42.011 --rc genhtml_legend=1 00:17:42.011 --rc geninfo_all_blocks=1 00:17:42.011 --rc geninfo_unexecuted_blocks=1 00:17:42.011 00:17:42.011 ' 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=cfa2def7-c8af-457f-82a0-b312efdea7f4 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:42.011 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:42.011 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:42.012 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:42.012 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:42.012 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.012 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:42.012 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.012 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:42.012 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:42.012 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:42.012 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:42.012 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:42.012 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:42.012 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:42.012 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:42.012 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:42.012 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:42.012 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:42.012 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:42.012 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:42.012 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:42.012 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:42.012 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:42.012 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:42.012 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:42.012 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:42.012 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:42.012 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:42.012 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:42.012 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:42.012 Cannot find device "nvmf_init_br" 00:17:42.012 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:17:42.012 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:42.012 Cannot find device "nvmf_init_br2" 00:17:42.012 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:17:42.012 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:42.012 Cannot find device "nvmf_tgt_br" 00:17:42.012 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:17:42.012 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:42.012 Cannot find device "nvmf_tgt_br2" 00:17:42.012 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:17:42.012 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:42.272 Cannot find device "nvmf_init_br" 00:17:42.272 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:17:42.272 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:42.272 Cannot find device "nvmf_init_br2" 00:17:42.272 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:17:42.272 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:42.272 Cannot find device "nvmf_tgt_br" 00:17:42.272 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:17:42.272 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:42.272 Cannot find device "nvmf_tgt_br2" 00:17:42.272 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:17:42.272 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:42.272 Cannot find device "nvmf_br" 00:17:42.272 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:17:42.272 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:42.272 Cannot find device "nvmf_init_if" 00:17:42.272 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:17:42.272 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:42.272 Cannot find device "nvmf_init_if2" 00:17:42.272 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:17:42.272 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:17:42.272 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:42.272 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:17:42.272 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:42.272 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:42.272 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:17:42.272 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:42.272 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:42.272 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:42.272 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:42.272 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:42.272 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:42.272 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:42.272 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:42.272 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:42.272 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:42.272 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:42.272 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:42.272 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:42.272 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:42.272 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:42.272 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:42.272 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:42.272 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:42.532 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:42.532 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:42.532 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:42.532 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:42.532 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
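For reference, the veth/namespace topology that nvmf_veth_init is building in the trace above can be reproduced by hand with plain iproute2 commands. This is only a minimal sketch: interface names and addresses are copied from this run, it wires up one initiator/target pair instead of the two pairs the script creates, and it assumes root privileges on a Linux host.

  # target side lives in its own network namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end + bridge end
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target end + bridge end
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # a bridge in the default namespace ties the two veth peers together
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ping -c 1 10.0.0.3   # initiator side should now reach the target namespace

The iptables rules added a little further down in the trace (ACCEPT for NVMe/TCP port 4420 on the initiator interfaces plus FORWARD across nvmf_br) only matter on hosts with a restrictive default policy; the four pings that follow are the script's own sanity check of this same topology.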
00:17:42.532 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:42.532 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:42.532 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:42.532 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:42.532 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:42.532 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:42.532 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:42.532 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:42.532 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:42.532 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:42.532 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:42.532 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:17:42.532 00:17:42.532 --- 10.0.0.3 ping statistics --- 00:17:42.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.532 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:17:42.532 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:42.532 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:42.532 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:17:42.532 00:17:42.532 --- 10.0.0.4 ping statistics --- 00:17:42.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.533 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:17:42.533 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:42.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:42.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:17:42.533 00:17:42.533 --- 10.0.0.1 ping statistics --- 00:17:42.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.533 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:17:42.533 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:42.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:42.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:17:42.533 00:17:42.533 --- 10.0.0.2 ping statistics --- 00:17:42.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.533 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:17:42.533 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:42.533 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:17:42.533 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:42.533 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:42.533 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:42.533 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:42.533 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:42.533 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:42.533 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:42.533 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:17:42.533 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:42.533 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:42.533 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:42.533 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=80676 00:17:42.533 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:42.533 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 80676 00:17:42.533 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 80676 ']' 00:17:42.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.533 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.533 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:42.533 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.533 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:42.533 13:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:42.533 [2024-12-06 13:56:41.877395] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
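The nvmfappstart/waitforlisten sequence above boils down to launching nvmf_tgt inside the target namespace and polling its RPC socket until it answers. A minimal sketch, assuming the paths from this run and using rpc_get_methods purely as a readiness probe (the test's own waitforlisten helper is more elaborate):

  SPDK=/home/vagrant/spdk_repo/spdk
  # start the target on cores 0-1 (-m 0x3) inside the namespace created earlier
  ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  # the RPC endpoint is a UNIX socket (/var/tmp/spdk.sock), so it is reachable
  # from the default namespace even though the app runs in nvmf_tgt_ns_spdk
  until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done

Once the socket answers, the rpc.py calls that follow in the trace provision the multipath target: a TCP transport, a 64 MiB Malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1, and listeners on 10.0.0.3 ports 4420 and 4421.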
00:17:42.533 [2024-12-06 13:56:41.877519] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:42.792 [2024-12-06 13:56:42.024361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:42.792 [2024-12-06 13:56:42.075256] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:42.792 [2024-12-06 13:56:42.075601] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:42.792 [2024-12-06 13:56:42.075771] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:42.792 [2024-12-06 13:56:42.075916] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:42.792 [2024-12-06 13:56:42.075968] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:42.792 [2024-12-06 13:56:42.077317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:42.792 [2024-12-06 13:56:42.077326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.792 [2024-12-06 13:56:42.133918] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:43.052 13:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:43.052 13:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:17:43.052 13:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:43.052 13:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:43.053 13:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:43.053 13:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:43.053 13:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80676 00:17:43.053 13:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:43.312 [2024-12-06 13:56:42.518242] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:43.312 13:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:43.571 Malloc0 00:17:43.571 13:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:43.831 13:56:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:44.090 13:56:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:44.350 [2024-12-06 13:56:43.655823] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:44.350 13:56:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:44.621 [2024-12-06 13:56:43.875903] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:44.621 13:56:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:17:44.621 13:56:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80720 00:17:44.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:44.621 13:56:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:44.621 13:56:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80720 /var/tmp/bdevperf.sock 00:17:44.621 13:56:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 80720 ']' 00:17:44.621 13:56:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:44.621 13:56:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:44.621 13:56:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:44.621 13:56:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:44.621 13:56:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:44.898 13:56:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:44.898 13:56:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:17:44.898 13:56:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:45.157 13:56:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:45.416 Nvme0n1 00:17:45.675 13:56:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:45.933 Nvme0n1 00:17:45.933 13:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:17:45.933 13:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:17:46.868 13:56:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:17:46.868 13:56:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:47.127 13:56:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:47.386 13:56:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:17:47.386 13:56:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80676 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:47.386 13:56:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80762 00:17:47.386 13:56:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:53.956 13:56:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:53.956 13:56:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:53.956 13:56:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:17:53.956 13:56:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:53.956 Attaching 4 probes... 00:17:53.956 @path[10.0.0.3, 4421]: 18846 00:17:53.956 @path[10.0.0.3, 4421]: 19300 00:17:53.956 @path[10.0.0.3, 4421]: 19246 00:17:53.956 @path[10.0.0.3, 4421]: 19310 00:17:53.956 @path[10.0.0.3, 4421]: 19349 00:17:53.956 13:56:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:53.956 13:56:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:17:53.956 13:56:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:53.956 13:56:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:17:53.956 13:56:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:53.956 13:56:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:53.956 13:56:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80762 00:17:53.956 13:56:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:53.956 13:56:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:17:53.956 13:56:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:53.956 13:56:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:17:54.214 13:56:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:17:54.214 13:56:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80877 00:17:54.214 13:56:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:54.214 13:56:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80676 
/home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:00.798 13:56:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:00.798 13:56:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:00.798 13:56:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:00.798 13:56:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:00.799 Attaching 4 probes... 00:18:00.799 @path[10.0.0.3, 4420]: 19455 00:18:00.799 @path[10.0.0.3, 4420]: 19792 00:18:00.799 @path[10.0.0.3, 4420]: 19330 00:18:00.799 @path[10.0.0.3, 4420]: 18860 00:18:00.799 @path[10.0.0.3, 4420]: 19498 00:18:00.799 13:56:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:00.799 13:56:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:00.799 13:56:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:00.799 13:56:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:00.799 13:56:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:00.799 13:56:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:00.799 13:56:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80877 00:18:00.799 13:56:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:00.799 13:56:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:18:00.799 13:56:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:00.799 13:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:01.058 13:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:18:01.058 13:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80676 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:01.058 13:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80989 00:18:01.058 13:57:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:07.620 13:57:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:07.620 13:57:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:07.620 13:57:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:07.620 13:57:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:07.620 Attaching 4 probes... 00:18:07.620 @path[10.0.0.3, 4421]: 13162 00:18:07.620 @path[10.0.0.3, 4421]: 18692 00:18:07.620 @path[10.0.0.3, 4421]: 20409 00:18:07.620 @path[10.0.0.3, 4421]: 19791 00:18:07.620 @path[10.0.0.3, 4421]: 20262 00:18:07.620 13:57:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:07.620 13:57:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:07.620 13:57:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:07.620 13:57:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:07.620 13:57:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:07.620 13:57:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:07.620 13:57:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80989 00:18:07.620 13:57:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:07.620 13:57:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:18:07.620 13:57:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:07.620 13:57:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:07.878 13:57:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:18:07.878 13:57:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81107 00:18:07.878 13:57:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80676 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:07.878 13:57:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:14.465 13:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:14.465 13:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:18:14.465 13:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:18:14.465 13:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:14.465 Attaching 4 probes... 
00:18:14.465 00:18:14.465 00:18:14.465 00:18:14.465 00:18:14.465 00:18:14.465 13:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:14.465 13:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:14.465 13:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:14.465 13:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:18:14.465 13:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:18:14.465 13:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:18:14.465 13:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81107 00:18:14.465 13:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:14.465 13:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:18:14.465 13:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:14.465 13:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:14.725 13:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:18:14.725 13:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80676 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:14.725 13:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81225 00:18:14.725 13:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:21.296 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:21.296 13:57:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:21.296 13:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:21.296 13:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:21.296 Attaching 4 probes... 
00:18:21.296 @path[10.0.0.3, 4421]: 17322 00:18:21.296 @path[10.0.0.3, 4421]: 18723 00:18:21.296 @path[10.0.0.3, 4421]: 18972 00:18:21.296 @path[10.0.0.3, 4421]: 18255 00:18:21.296 @path[10.0.0.3, 4421]: 18866 00:18:21.296 13:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:21.296 13:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:21.296 13:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:21.296 13:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:21.296 13:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:21.296 13:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:21.296 13:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81225 00:18:21.296 13:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:21.296 13:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:21.296 13:57:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:18:22.233 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:18:22.233 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81343 00:18:22.233 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80676 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:22.233 13:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:28.797 13:57:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:28.797 13:57:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:28.797 13:57:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:28.797 13:57:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:28.797 Attaching 4 probes... 
00:18:28.797 @path[10.0.0.3, 4420]: 19704 00:18:28.797 @path[10.0.0.3, 4420]: 19985 00:18:28.797 @path[10.0.0.3, 4420]: 19886 00:18:28.797 @path[10.0.0.3, 4420]: 20030 00:18:28.797 @path[10.0.0.3, 4420]: 20200 00:18:28.797 13:57:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:28.797 13:57:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:28.797 13:57:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:28.797 13:57:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:28.797 13:57:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:28.797 13:57:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:28.797 13:57:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81343 00:18:28.797 13:57:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:28.797 13:57:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:28.797 [2024-12-06 13:57:28.097395] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:28.798 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:29.056 13:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:18:35.620 13:57:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:18:35.620 13:57:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81523 00:18:35.620 13:57:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80676 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:35.620 13:57:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:42.217 13:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:42.217 13:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:42.217 13:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:42.217 13:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:42.217 Attaching 4 probes... 
00:18:42.217 @path[10.0.0.3, 4421]: 18231 00:18:42.217 @path[10.0.0.3, 4421]: 18577 00:18:42.217 @path[10.0.0.3, 4421]: 18388 00:18:42.217 @path[10.0.0.3, 4421]: 18322 00:18:42.217 @path[10.0.0.3, 4421]: 18864 00:18:42.217 13:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:42.217 13:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:42.217 13:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:42.217 13:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:42.217 13:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:42.217 13:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:42.217 13:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81523 00:18:42.217 13:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:42.217 13:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80720 00:18:42.217 13:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 80720 ']' 00:18:42.217 13:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 80720 00:18:42.217 13:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:18:42.217 13:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:42.217 13:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80720 00:18:42.217 killing process with pid 80720 00:18:42.217 13:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:42.217 13:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:42.217 13:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80720' 00:18:42.217 13:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 80720 00:18:42.217 13:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 80720 00:18:42.217 { 00:18:42.217 "results": [ 00:18:42.217 { 00:18:42.217 "job": "Nvme0n1", 00:18:42.217 "core_mask": "0x4", 00:18:42.217 "workload": "verify", 00:18:42.217 "status": "terminated", 00:18:42.217 "verify_range": { 00:18:42.217 "start": 0, 00:18:42.217 "length": 16384 00:18:42.217 }, 00:18:42.217 "queue_depth": 128, 00:18:42.217 "io_size": 4096, 00:18:42.217 "runtime": 55.511824, 00:18:42.217 "iops": 8160.2614967218515, 00:18:42.217 "mibps": 31.876021471569732, 00:18:42.217 "io_failed": 0, 00:18:42.217 "io_timeout": 0, 00:18:42.217 "avg_latency_us": 15657.792876045502, 00:18:42.217 "min_latency_us": 1012.829090909091, 00:18:42.217 "max_latency_us": 7046430.72 00:18:42.217 } 00:18:42.217 ], 00:18:42.217 "core_count": 1 00:18:42.217 } 00:18:42.217 13:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80720 00:18:42.217 13:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:42.217 [2024-12-06 13:56:43.940414] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 
24.03.0 initialization... 00:18:42.217 [2024-12-06 13:56:43.940578] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80720 ] 00:18:42.217 [2024-12-06 13:56:44.089211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.217 [2024-12-06 13:56:44.142551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:42.217 [2024-12-06 13:56:44.201343] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:42.217 Running I/O for 90 seconds... 00:18:42.217 7828.00 IOPS, 30.58 MiB/s [2024-12-06T13:57:41.621Z] 8514.00 IOPS, 33.26 MiB/s [2024-12-06T13:57:41.621Z] 8876.00 IOPS, 34.67 MiB/s [2024-12-06T13:57:41.621Z] 9073.25 IOPS, 35.44 MiB/s [2024-12-06T13:57:41.621Z] 9182.20 IOPS, 35.87 MiB/s [2024-12-06T13:57:41.621Z] 9259.33 IOPS, 36.17 MiB/s [2024-12-06T13:57:41.621Z] 9319.86 IOPS, 36.41 MiB/s [2024-12-06T13:57:41.621Z] 9345.50 IOPS, 36.51 MiB/s [2024-12-06T13:57:41.621Z] [2024-12-06 13:56:53.459945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:89376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.217 [2024-12-06 13:56:53.460001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:42.217 [2024-12-06 13:56:53.460066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.217 [2024-12-06 13:56:53.460086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:42.217 [2024-12-06 13:56:53.460106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:89392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.217 [2024-12-06 13:56:53.460133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:42.217 [2024-12-06 13:56:53.460154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:89400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.217 [2024-12-06 13:56:53.460168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:42.217 [2024-12-06 13:56:53.460187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:89408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.217 [2024-12-06 13:56:53.460200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:42.217 [2024-12-06 13:56:53.460219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.217 [2024-12-06 13:56:53.460232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:42.218 [2024-12-06 13:56:53.460251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:89424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.218 [2024-12-06 13:56:53.460264] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:42.218 [2024-12-06 13:56:53.460283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:89432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.218 [2024-12-06 13:56:53.460297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:42.218 [2024-12-06 13:56:53.460315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:89440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.218 [2024-12-06 13:56:53.460328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:42.218 [2024-12-06 13:56:53.460346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:89448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.218 [2024-12-06 13:56:53.460391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:42.218 [2024-12-06 13:56:53.460413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:89456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.218 [2024-12-06 13:56:53.460427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:42.218 [2024-12-06 13:56:53.460445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:89464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.218 [2024-12-06 13:56:53.460459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:42.218 [2024-12-06 13:56:53.460477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:89472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.218 [2024-12-06 13:56:53.460490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:42.218 [2024-12-06 13:56:53.460508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:88992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.218 [2024-12-06 13:56:53.460522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:42.218 [2024-12-06 13:56:53.460557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:89000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.218 [2024-12-06 13:56:53.460571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:42.218 [2024-12-06 13:56:53.460589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:89008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.218 [2024-12-06 13:56:53.460603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:42.218 [2024-12-06 13:56:53.460623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:89016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:42.218 [2024-12-06 13:56:53.460637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:42.218 [2024-12-06 13:56:53.460656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:89024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.218 [2024-12-06 13:56:53.460670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:42.218 [2024-12-06 13:56:53.460689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:89032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.218 [2024-12-06 13:56:53.460702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:42.218 [2024-12-06 13:56:53.460721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:89040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.218 [2024-12-06 13:56:53.460744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:42.218 [2024-12-06 13:56:53.460763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:89048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.218 [2024-12-06 13:56:53.460776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:42.218 [2024-12-06 13:56:53.460795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:89480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.218 [2024-12-06 13:56:53.460816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:42.218 [2024-12-06 13:56:53.460837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:89488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.218 [2024-12-06 13:56:53.460851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:42.218 [2024-12-06 13:56:53.460885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:89496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.218 [2024-12-06 13:56:53.460898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:42.218 [2024-12-06 13:56:53.460933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:89504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.218 [2024-12-06 13:56:53.460950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:42.218 [2024-12-06 13:56:53.460970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:89512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.218 [2024-12-06 13:56:53.460983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:42.218 [2024-12-06 13:56:53.461001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 
lba:89520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.218 [2024-12-06 13:56:53.461032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:42.218 [2024-12-06 13:56:53.461051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:89528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.218 [2024-12-06 13:56:53.461065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:42.218 [2024-12-06 13:56:53.461083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:89536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.218 [2024-12-06 13:56:53.461097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:42.218 [2024-12-06 13:56:53.461115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:89544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.218 [2024-12-06 13:56:53.461130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:42.218 [2024-12-06 13:56:53.461161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:89552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.218 [2024-12-06 13:56:53.461178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.218 [2024-12-06 13:56:53.461198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:89560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.218 [2024-12-06 13:56:53.461212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.218 [2024-12-06 13:56:53.461231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:89568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.218 [2024-12-06 13:56:53.461246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:42.218 [2024-12-06 13:56:53.461265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.218 [2024-12-06 13:56:53.461278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:42.218 [2024-12-06 13:56:53.461306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:89584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.218 [2024-12-06 13:56:53.461321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:42.218 [2024-12-06 13:56:53.461340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:89592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.218 [2024-12-06 13:56:53.461353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:42.218 [2024-12-06 13:56:53.461372] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:89600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.218 [2024-12-06 13:56:53.461385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:42.218 [2024-12-06 13:56:53.461404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:89608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.218 [2024-12-06 13:56:53.461417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:42.218 [2024-12-06 13:56:53.461436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:89616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.218 [2024-12-06 13:56:53.461450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:42.218 [2024-12-06 13:56:53.461468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:89624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.218 [2024-12-06 13:56:53.461482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:42.218 [2024-12-06 13:56:53.461501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:89632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.218 [2024-12-06 13:56:53.461531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:42.218 [2024-12-06 13:56:53.461567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:89056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.218 [2024-12-06 13:56:53.461581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:42.218 [2024-12-06 13:56:53.461601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:89064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.218 [2024-12-06 13:56:53.461615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:42.218 [2024-12-06 13:56:53.461635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:89072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.218 [2024-12-06 13:56:53.461650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:42.218 [2024-12-06 13:56:53.461670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:89080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.218 [2024-12-06 13:56:53.461684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:42.219 [2024-12-06 13:56:53.461704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:89088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.219 [2024-12-06 13:56:53.461719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:18:42.219 [2024-12-06 13:56:53.461746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:89096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.219 [2024-12-06 13:56:53.461761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:42.219 [2024-12-06 13:56:53.461783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:89104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.219 [2024-12-06 13:56:53.461798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:42.219 [2024-12-06 13:56:53.461818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:89112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.219 [2024-12-06 13:56:53.461833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:42.219 [2024-12-06 13:56:53.461867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:89640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.219 [2024-12-06 13:56:53.461896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:42.219 [2024-12-06 13:56:53.461915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:89648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.219 [2024-12-06 13:56:53.461928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:42.219 [2024-12-06 13:56:53.461947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:89656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.219 [2024-12-06 13:56:53.461961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:42.219 [2024-12-06 13:56:53.461980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:89664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.219 [2024-12-06 13:56:53.461993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:42.219 [2024-12-06 13:56:53.462012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:89672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.219 [2024-12-06 13:56:53.462025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:42.219 [2024-12-06 13:56:53.462044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:89680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.219 [2024-12-06 13:56:53.462058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:42.219 [2024-12-06 13:56:53.462076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:89688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.219 [2024-12-06 13:56:53.462090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:42.219 [2024-12-06 13:56:53.462109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:89696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.219 [2024-12-06 13:56:53.462123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:42.219 [2024-12-06 13:56:53.462142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:89704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.219 [2024-12-06 13:56:53.462164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:42.219 [2024-12-06 13:56:53.462186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:89712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.219 [2024-12-06 13:56:53.462207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:42.219 [2024-12-06 13:56:53.462227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:89720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.219 [2024-12-06 13:56:53.462241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:42.219 [2024-12-06 13:56:53.462260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:89728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.219 [2024-12-06 13:56:53.462274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:42.219 [2024-12-06 13:56:53.462293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:89736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.219 [2024-12-06 13:56:53.462308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:42.219 [2024-12-06 13:56:53.462326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:89744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.219 [2024-12-06 13:56:53.462340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:42.219 [2024-12-06 13:56:53.462359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:89752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.219 [2024-12-06 13:56:53.462372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.219 [2024-12-06 13:56:53.462396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:89760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.219 [2024-12-06 13:56:53.462410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:42.219 [2024-12-06 13:56:53.462429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:89768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.219 [2024-12-06 13:56:53.462443] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:42.219 [2024-12-06 13:56:53.462461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:89776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.219 [2024-12-06 13:56:53.462475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:42.219 [2024-12-06 13:56:53.462494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:89120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.219 [2024-12-06 13:56:53.462507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:42.219 [2024-12-06 13:56:53.462526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:89128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.219 [2024-12-06 13:56:53.462556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:42.219 [2024-12-06 13:56:53.462576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:89136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.219 [2024-12-06 13:56:53.462590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:42.219 [2024-12-06 13:56:53.462609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:89144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.219 [2024-12-06 13:56:53.462629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:42.219 [2024-12-06 13:56:53.462650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:89152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.219 [2024-12-06 13:56:53.462665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:42.219 [2024-12-06 13:56:53.462684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:89160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.219 [2024-12-06 13:56:53.462698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:42.219 [2024-12-06 13:56:53.462717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:89168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.219 [2024-12-06 13:56:53.462731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:42.219 [2024-12-06 13:56:53.462750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:89176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.219 [2024-12-06 13:56:53.462765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:42.219 [2024-12-06 13:56:53.462784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:89784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:42.219 [2024-12-06 13:56:53.462798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:42.219 [2024-12-06 13:56:53.462817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:89792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.219 [2024-12-06 13:56:53.462832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:42.219 [2024-12-06 13:56:53.462851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:89800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.219 [2024-12-06 13:56:53.462866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:42.219 [2024-12-06 13:56:53.462900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.219 [2024-12-06 13:56:53.462914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:42.219 [2024-12-06 13:56:53.462933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:89816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.219 [2024-12-06 13:56:53.462947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:42.219 [2024-12-06 13:56:53.462984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:89824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.219 [2024-12-06 13:56:53.463002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:42.219 [2024-12-06 13:56:53.463021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:89832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.219 [2024-12-06 13:56:53.463035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:42.219 [2024-12-06 13:56:53.463054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:89840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.219 [2024-12-06 13:56:53.463067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:42.219 [2024-12-06 13:56:53.463094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:89848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.220 [2024-12-06 13:56:53.463108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:42.220 [2024-12-06 13:56:53.463140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:89856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.220 [2024-12-06 13:56:53.463157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:42.220 [2024-12-06 13:56:53.463177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 
lba:89864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.220 [2024-12-06 13:56:53.463191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:42.220 [2024-12-06 13:56:53.463210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:89872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.220 [2024-12-06 13:56:53.463223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:42.220 [2024-12-06 13:56:53.463242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:89880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.220 [2024-12-06 13:56:53.463256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:42.220 [2024-12-06 13:56:53.463275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:89888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.220 [2024-12-06 13:56:53.463288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:42.220 [2024-12-06 13:56:53.463308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:89896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.220 [2024-12-06 13:56:53.463331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:42.220 [2024-12-06 13:56:53.463354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:89904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.220 [2024-12-06 13:56:53.463375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:42.220 [2024-12-06 13:56:53.463394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:89912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.220 [2024-12-06 13:56:53.463408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:42.220 [2024-12-06 13:56:53.463427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:89920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.220 [2024-12-06 13:56:53.463440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:42.220 [2024-12-06 13:56:53.463459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.220 [2024-12-06 13:56:53.463479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:42.220 [2024-12-06 13:56:53.463499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.220 [2024-12-06 13:56:53.463513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:42.220 [2024-12-06 13:56:53.463543] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:89192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.220 [2024-12-06 13:56:53.463558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.220 [2024-12-06 13:56:53.463577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:89200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.220 [2024-12-06 13:56:53.463591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:42.220 [2024-12-06 13:56:53.463610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:89208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.220 [2024-12-06 13:56:53.463623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:42.220 [2024-12-06 13:56:53.463643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:89216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.220 [2024-12-06 13:56:53.463656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:42.220 [2024-12-06 13:56:53.463677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:89224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.220 [2024-12-06 13:56:53.463693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:42.220 [2024-12-06 13:56:53.463711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:89232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.220 [2024-12-06 13:56:53.463725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:42.220 [2024-12-06 13:56:53.463744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:89240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.220 [2024-12-06 13:56:53.463758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:42.220 [2024-12-06 13:56:53.463777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:89248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.220 [2024-12-06 13:56:53.463791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:42.220 [2024-12-06 13:56:53.463810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:89256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.220 [2024-12-06 13:56:53.463824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:42.220 [2024-12-06 13:56:53.463843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:89264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.220 [2024-12-06 13:56:53.463857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004a p:0 m:0 
dnr:0 00:18:42.220 [2024-12-06 13:56:53.463876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:89272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.220 [2024-12-06 13:56:53.463890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:42.220 [2024-12-06 13:56:53.463909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:89280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.220 [2024-12-06 13:56:53.463922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:42.220 [2024-12-06 13:56:53.463941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:89288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.220 [2024-12-06 13:56:53.463960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:42.220 [2024-12-06 13:56:53.463980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:89296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.220 [2024-12-06 13:56:53.463995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:42.220 [2024-12-06 13:56:53.464013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:89304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.220 [2024-12-06 13:56:53.464032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:42.220 [2024-12-06 13:56:53.464051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:89312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.220 [2024-12-06 13:56:53.464065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:42.220 [2024-12-06 13:56:53.464084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:89320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.220 [2024-12-06 13:56:53.464107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:42.220 [2024-12-06 13:56:53.464129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:89328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.220 [2024-12-06 13:56:53.464143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:42.220 [2024-12-06 13:56:53.464162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:89336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.220 [2024-12-06 13:56:53.464176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:42.220 [2024-12-06 13:56:53.465723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:89344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.220 [2024-12-06 13:56:53.465752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:42.220 [2024-12-06 13:56:53.465778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:89352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.220 [2024-12-06 13:56:53.465794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:42.220 [2024-12-06 13:56:53.465829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:89360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.220 [2024-12-06 13:56:53.465843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:42.220 [2024-12-06 13:56:53.465862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:89368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.220 [2024-12-06 13:56:53.465876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:42.220 [2024-12-06 13:56:53.465895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:89936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.220 [2024-12-06 13:56:53.465909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:42.220 [2024-12-06 13:56:53.465928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:89944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.220 [2024-12-06 13:56:53.465952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:42.220 [2024-12-06 13:56:53.465986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:89952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.220 [2024-12-06 13:56:53.466000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:42.220 [2024-12-06 13:56:53.466020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:89960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.220 [2024-12-06 13:56:53.466034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:42.221 [2024-12-06 13:56:53.466053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:89968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.221 [2024-12-06 13:56:53.466066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:42.221 [2024-12-06 13:56:53.466085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:89976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.221 [2024-12-06 13:56:53.466099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:42.221 [2024-12-06 13:56:53.466118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:89984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.221 [2024-12-06 13:56:53.466146] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:42.221 [2024-12-06 13:56:53.466182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:89992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.221 [2024-12-06 13:56:53.466200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:42.221 [2024-12-06 13:56:53.466220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:90000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.221 [2024-12-06 13:56:53.466234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:42.221 [2024-12-06 13:56:53.466253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:90008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.221 [2024-12-06 13:56:53.466267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.221 9351.67 IOPS, 36.53 MiB/s [2024-12-06T13:57:41.625Z] 9403.90 IOPS, 36.73 MiB/s [2024-12-06T13:57:41.625Z] 9449.18 IOPS, 36.91 MiB/s [2024-12-06T13:57:41.625Z] 9457.92 IOPS, 36.94 MiB/s [2024-12-06T13:57:41.625Z] 9472.38 IOPS, 37.00 MiB/s [2024-12-06T13:57:41.625Z] 9493.86 IOPS, 37.09 MiB/s [2024-12-06T13:57:41.625Z] [2024-12-06 13:57:00.077810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.221 [2024-12-06 13:57:00.077869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:42.221 [2024-12-06 13:57:00.077938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.221 [2024-12-06 13:57:00.077957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:42.221 [2024-12-06 13:57:00.077977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.221 [2024-12-06 13:57:00.077990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:42.221 [2024-12-06 13:57:00.078009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.221 [2024-12-06 13:57:00.078051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:42.221 [2024-12-06 13:57:00.078072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.221 [2024-12-06 13:57:00.078085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:42.221 [2024-12-06 13:57:00.078104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:75688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.221 [2024-12-06 13:57:00.078138] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:42.221 [2024-12-06 13:57:00.078161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:75696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.221 [2024-12-06 13:57:00.078174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:42.221 [2024-12-06 13:57:00.078192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:75704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.221 [2024-12-06 13:57:00.078205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:42.221 [2024-12-06 13:57:00.078223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:75712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.221 [2024-12-06 13:57:00.078236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:42.221 [2024-12-06 13:57:00.078254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:75720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.221 [2024-12-06 13:57:00.078267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:42.221 [2024-12-06 13:57:00.078285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:75728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.221 [2024-12-06 13:57:00.078298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:42.221 [2024-12-06 13:57:00.078315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:75736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.221 [2024-12-06 13:57:00.078328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:42.221 [2024-12-06 13:57:00.078346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:75744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.221 [2024-12-06 13:57:00.078359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:42.221 [2024-12-06 13:57:00.078376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.221 [2024-12-06 13:57:00.078389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:42.221 [2024-12-06 13:57:00.078408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.221 [2024-12-06 13:57:00.078420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:42.221 [2024-12-06 13:57:00.078438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:18:42.221 [2024-12-06 13:57:00.078461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:42.221 [2024-12-06 13:57:00.078495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.221 [2024-12-06 13:57:00.078513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:42.221 [2024-12-06 13:57:00.078534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.221 [2024-12-06 13:57:00.078547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:42.221 [2024-12-06 13:57:00.078566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.221 [2024-12-06 13:57:00.078579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:42.221 [2024-12-06 13:57:00.078597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:76224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.221 [2024-12-06 13:57:00.078610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:42.221 [2024-12-06 13:57:00.078628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.221 [2024-12-06 13:57:00.078642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:42.221 [2024-12-06 13:57:00.078660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:76240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.221 [2024-12-06 13:57:00.078673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:42.222 [2024-12-06 13:57:00.078691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.222 [2024-12-06 13:57:00.078704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:42.222 [2024-12-06 13:57:00.078722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:76256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.222 [2024-12-06 13:57:00.078735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:42.222 [2024-12-06 13:57:00.078753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:76264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.222 [2024-12-06 13:57:00.078766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:42.222 [2024-12-06 13:57:00.078783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 
nsid:1 lba:76272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.222 [2024-12-06 13:57:00.078796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:42.222 [2024-12-06 13:57:00.078814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:76280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.222 [2024-12-06 13:57:00.078827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:42.222 [2024-12-06 13:57:00.078845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:76288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.222 [2024-12-06 13:57:00.078858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:42.222 [2024-12-06 13:57:00.078883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:76296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.222 [2024-12-06 13:57:00.078898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:42.222 [2024-12-06 13:57:00.078916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:76304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.222 [2024-12-06 13:57:00.078929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.222 [2024-12-06 13:57:00.078947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:76312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.222 [2024-12-06 13:57:00.078960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:42.222 [2024-12-06 13:57:00.078978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:76320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.222 [2024-12-06 13:57:00.078991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:42.222 [2024-12-06 13:57:00.079011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:75752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.222 [2024-12-06 13:57:00.079024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:42.222 [2024-12-06 13:57:00.079043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:75760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.222 [2024-12-06 13:57:00.079056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:42.222 [2024-12-06 13:57:00.079075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:75768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.222 [2024-12-06 13:57:00.079088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:42.222 [2024-12-06 13:57:00.079119] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:75776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.222 [2024-12-06 13:57:00.079134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:42.222 [2024-12-06 13:57:00.079153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:75784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.222 [2024-12-06 13:57:00.079167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:42.222 [2024-12-06 13:57:00.079185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.222 [2024-12-06 13:57:00.079198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:42.222 [2024-12-06 13:57:00.079216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:75800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.222 [2024-12-06 13:57:00.079229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:42.222 [2024-12-06 13:57:00.079247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.222 [2024-12-06 13:57:00.079261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:42.222 [2024-12-06 13:57:00.079300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.222 [2024-12-06 13:57:00.079318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:42.222 [2024-12-06 13:57:00.079354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:76336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.222 [2024-12-06 13:57:00.079371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:42.222 [2024-12-06 13:57:00.079389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:76344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.222 [2024-12-06 13:57:00.079402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:42.222 [2024-12-06 13:57:00.079420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.223 [2024-12-06 13:57:00.079433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:42.223 [2024-12-06 13:57:00.079451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.223 [2024-12-06 13:57:00.079464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 
00:18:42.223 [2024-12-06 13:57:00.079482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.223 [2024-12-06 13:57:00.079496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:42.223 [2024-12-06 13:57:00.079513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:76376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.223 [2024-12-06 13:57:00.079527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:42.223 [2024-12-06 13:57:00.079545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.223 [2024-12-06 13:57:00.079558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:42.223 [2024-12-06 13:57:00.079576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:76392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.223 [2024-12-06 13:57:00.079589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:42.223 [2024-12-06 13:57:00.079607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.223 [2024-12-06 13:57:00.079620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:42.223 [2024-12-06 13:57:00.079639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:76408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.223 [2024-12-06 13:57:00.079651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:42.223 [2024-12-06 13:57:00.079672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.223 [2024-12-06 13:57:00.079685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:42.223 [2024-12-06 13:57:00.079702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.223 [2024-12-06 13:57:00.079723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:42.223 [2024-12-06 13:57:00.079742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.223 [2024-12-06 13:57:00.079765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:42.223 [2024-12-06 13:57:00.079784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:76440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.223 [2024-12-06 13:57:00.079797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:42.223 [2024-12-06 13:57:00.079815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.223 [2024-12-06 13:57:00.079828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:42.223 [2024-12-06 13:57:00.079849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:76456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.223 [2024-12-06 13:57:00.079864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:42.223 [2024-12-06 13:57:00.079882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.223 [2024-12-06 13:57:00.079895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:42.223 [2024-12-06 13:57:00.079913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:75816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.223 [2024-12-06 13:57:00.079927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:42.223 [2024-12-06 13:57:00.079945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:75824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.223 [2024-12-06 13:57:00.079958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:42.223 [2024-12-06 13:57:00.079976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.223 [2024-12-06 13:57:00.079989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.223 [2024-12-06 13:57:00.080007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:75840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.223 [2024-12-06 13:57:00.080021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.223 [2024-12-06 13:57:00.080040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:75848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.223 [2024-12-06 13:57:00.080054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:42.223 [2024-12-06 13:57:00.080072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:75856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.223 [2024-12-06 13:57:00.080085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:42.223 [2024-12-06 13:57:00.080115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.223 [2024-12-06 13:57:00.080138] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:42.223 [2024-12-06 13:57:00.080159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:75872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.223 [2024-12-06 13:57:00.080174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:42.223 [2024-12-06 13:57:00.080192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:76472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.223 [2024-12-06 13:57:00.080205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:42.223 [2024-12-06 13:57:00.080223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.223 [2024-12-06 13:57:00.080236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:42.223 [2024-12-06 13:57:00.080254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:76488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.223 [2024-12-06 13:57:00.080267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:42.223 [2024-12-06 13:57:00.080285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:76496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.223 [2024-12-06 13:57:00.080298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:42.223 [2024-12-06 13:57:00.080316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:76504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.223 [2024-12-06 13:57:00.080329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:42.224 [2024-12-06 13:57:00.080347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.224 [2024-12-06 13:57:00.080360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:42.224 [2024-12-06 13:57:00.080665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:76520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.224 [2024-12-06 13:57:00.080687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:42.224 [2024-12-06 13:57:00.080713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:76528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.224 [2024-12-06 13:57:00.080728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:42.224 [2024-12-06 13:57:00.080750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:76536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:42.224 [2024-12-06 13:57:00.080764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:42.224 [2024-12-06 13:57:00.080785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:76544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.224 [2024-12-06 13:57:00.080799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:42.224 [2024-12-06 13:57:00.080820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.224 [2024-12-06 13:57:00.080842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:42.224 [2024-12-06 13:57:00.080866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:76560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.224 [2024-12-06 13:57:00.080881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:42.224 [2024-12-06 13:57:00.080903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.224 [2024-12-06 13:57:00.080916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:42.224 [2024-12-06 13:57:00.080938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.224 [2024-12-06 13:57:00.080951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:42.224 [2024-12-06 13:57:00.080973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.224 [2024-12-06 13:57:00.080986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:42.224 [2024-12-06 13:57:00.081008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.224 [2024-12-06 13:57:00.081021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:42.224 [2024-12-06 13:57:00.081042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.224 [2024-12-06 13:57:00.081056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:42.224 [2024-12-06 13:57:00.081078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.224 [2024-12-06 13:57:00.081091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:42.224 [2024-12-06 13:57:00.081129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.224 [2024-12-06 13:57:00.081144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:42.224 [2024-12-06 13:57:00.081166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:75880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.224 [2024-12-06 13:57:00.081179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:42.224 [2024-12-06 13:57:00.081201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:75888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.224 [2024-12-06 13:57:00.081215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:42.224 [2024-12-06 13:57:00.081236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:75896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.224 [2024-12-06 13:57:00.081249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:42.224 [2024-12-06 13:57:00.081271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.224 [2024-12-06 13:57:00.081284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:42.224 [2024-12-06 13:57:00.081314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:75912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.224 [2024-12-06 13:57:00.081328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:42.224 [2024-12-06 13:57:00.081351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:75920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.224 [2024-12-06 13:57:00.081364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:42.224 [2024-12-06 13:57:00.081386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.224 [2024-12-06 13:57:00.081400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:42.224 [2024-12-06 13:57:00.081422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:75936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.224 [2024-12-06 13:57:00.081435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:42.224 [2024-12-06 13:57:00.081456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:75944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.224 [2024-12-06 13:57:00.081470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.224 [2024-12-06 13:57:00.081492] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:75952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.224 [2024-12-06 13:57:00.081505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:42.224 [2024-12-06 13:57:00.081527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:75960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.224 [2024-12-06 13:57:00.081541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:42.224 [2024-12-06 13:57:00.081562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:75968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.224 [2024-12-06 13:57:00.081576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:42.225 [2024-12-06 13:57:00.081598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:75976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.225 [2024-12-06 13:57:00.081611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:42.225 [2024-12-06 13:57:00.081633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:75984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.225 [2024-12-06 13:57:00.081646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:42.225 [2024-12-06 13:57:00.081668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:75992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.225 [2024-12-06 13:57:00.081682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:42.225 [2024-12-06 13:57:00.081703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:76000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.225 [2024-12-06 13:57:00.081717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:42.225 [2024-12-06 13:57:00.081744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:76008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.225 [2024-12-06 13:57:00.081759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:42.225 [2024-12-06 13:57:00.081781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:76016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.225 [2024-12-06 13:57:00.081803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:42.225 [2024-12-06 13:57:00.081826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:76024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.225 [2024-12-06 13:57:00.081839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002b p:0 m:0 dnr:0 
00:18:42.225 [2024-12-06 13:57:00.081861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:76032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.225 [2024-12-06 13:57:00.081875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:42.225 [2024-12-06 13:57:00.081910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:76040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.225 [2024-12-06 13:57:00.081927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:42.225 [2024-12-06 13:57:00.081950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:76048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.225 [2024-12-06 13:57:00.081963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:42.225 [2024-12-06 13:57:00.081985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:76056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.225 [2024-12-06 13:57:00.081999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:42.225 [2024-12-06 13:57:00.082020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:76064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.225 [2024-12-06 13:57:00.082034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:42.225 [2024-12-06 13:57:00.082055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.225 [2024-12-06 13:57:00.082069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:42.225 [2024-12-06 13:57:00.082091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.225 [2024-12-06 13:57:00.082116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:42.225 [2024-12-06 13:57:00.082140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.225 [2024-12-06 13:57:00.082153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:42.225 [2024-12-06 13:57:00.082175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.225 [2024-12-06 13:57:00.082188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:42.225 [2024-12-06 13:57:00.082210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:76656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.225 [2024-12-06 13:57:00.082231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:42.225 [2024-12-06 13:57:00.082254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:76664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.225 [2024-12-06 13:57:00.082268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:42.225 [2024-12-06 13:57:00.082289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.225 [2024-12-06 13:57:00.082303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:42.225 [2024-12-06 13:57:00.082324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:76680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.225 [2024-12-06 13:57:00.082337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:42.225 [2024-12-06 13:57:00.082358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:76688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.225 [2024-12-06 13:57:00.082371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:42.225 [2024-12-06 13:57:00.082393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:76696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.225 [2024-12-06 13:57:00.082410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:42.225 [2024-12-06 13:57:00.082432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.225 [2024-12-06 13:57:00.082445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:42.225 [2024-12-06 13:57:00.082467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:76072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.225 [2024-12-06 13:57:00.082480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:42.225 [2024-12-06 13:57:00.082502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.225 [2024-12-06 13:57:00.082515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:42.225 [2024-12-06 13:57:00.082536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:76088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.225 [2024-12-06 13:57:00.082550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:42.225 [2024-12-06 13:57:00.082572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:76096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.226 [2024-12-06 13:57:00.082585] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:42.226 [2024-12-06 13:57:00.082607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.226 [2024-12-06 13:57:00.082620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:42.226 [2024-12-06 13:57:00.082641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:76112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.226 [2024-12-06 13:57:00.082666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.226 [2024-12-06 13:57:00.082690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:76120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.226 [2024-12-06 13:57:00.082712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:42.226 [2024-12-06 13:57:00.082734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:76128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.226 [2024-12-06 13:57:00.082747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:42.226 9368.87 IOPS, 36.60 MiB/s [2024-12-06T13:57:41.630Z] 8887.88 IOPS, 34.72 MiB/s [2024-12-06T13:57:41.630Z] 8892.12 IOPS, 34.73 MiB/s [2024-12-06T13:57:41.630Z] 8960.33 IOPS, 35.00 MiB/s [2024-12-06T13:57:41.630Z] 9024.32 IOPS, 35.25 MiB/s [2024-12-06T13:57:41.630Z] 9068.70 IOPS, 35.42 MiB/s [2024-12-06T13:57:41.630Z] 9121.05 IOPS, 35.63 MiB/s [2024-12-06T13:57:41.630Z] [2024-12-06 13:57:07.112007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:26160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.226 [2024-12-06 13:57:07.112071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:42.226 [2024-12-06 13:57:07.112179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:26168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.226 [2024-12-06 13:57:07.112207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:42.226 [2024-12-06 13:57:07.112231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:26176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.226 [2024-12-06 13:57:07.112263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:42.226 [2024-12-06 13:57:07.112284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:26184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.226 [2024-12-06 13:57:07.112298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:42.226 [2024-12-06 13:57:07.112318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:26192 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:42.226 [2024-12-06 13:57:07.112332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:42.226 [2024-12-06 13:57:07.112353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:26200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.226 [2024-12-06 13:57:07.112368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:42.226 [2024-12-06 13:57:07.112388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:26208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.226 [2024-12-06 13:57:07.112402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:42.226 [2024-12-06 13:57:07.112427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:26216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.226 [2024-12-06 13:57:07.112441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:42.226 [2024-12-06 13:57:07.112460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:26224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.226 [2024-12-06 13:57:07.112475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:42.226 [2024-12-06 13:57:07.112515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:26232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.226 [2024-12-06 13:57:07.112531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:42.226 [2024-12-06 13:57:07.112551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:26240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.226 [2024-12-06 13:57:07.112566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:42.226 [2024-12-06 13:57:07.112585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.226 [2024-12-06 13:57:07.112599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:42.226 [2024-12-06 13:57:07.112619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:26256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.226 [2024-12-06 13:57:07.112633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:42.226 [2024-12-06 13:57:07.112653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:26264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.226 [2024-12-06 13:57:07.112681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:42.226 [2024-12-06 13:57:07.112700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:121 nsid:1 lba:26272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.226 [2024-12-06 13:57:07.112714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:42.226 [2024-12-06 13:57:07.112733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:26280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.226 [2024-12-06 13:57:07.112747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:42.226 [2024-12-06 13:57:07.112766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:26288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.226 [2024-12-06 13:57:07.112780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:42.226 [2024-12-06 13:57:07.112802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:26296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.226 [2024-12-06 13:57:07.112817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:42.226 [2024-12-06 13:57:07.112836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:26304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.226 [2024-12-06 13:57:07.112850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:42.226 [2024-12-06 13:57:07.112869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:26312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.226 [2024-12-06 13:57:07.112883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:42.226 [2024-12-06 13:57:07.112902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:25648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.226 [2024-12-06 13:57:07.112916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:42.226 [2024-12-06 13:57:07.112935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.226 [2024-12-06 13:57:07.112960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:42.227 [2024-12-06 13:57:07.112981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.227 [2024-12-06 13:57:07.112996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:42.227 [2024-12-06 13:57:07.113015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:25672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.227 [2024-12-06 13:57:07.113030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:42.227 [2024-12-06 13:57:07.113049] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:25680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.227 [2024-12-06 13:57:07.113063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:42.227 [2024-12-06 13:57:07.113082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:25688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.227 [2024-12-06 13:57:07.113096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:42.227 [2024-12-06 13:57:07.113116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:25696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.227 [2024-12-06 13:57:07.113143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:42.227 [2024-12-06 13:57:07.113166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.227 [2024-12-06 13:57:07.113181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:42.227 [2024-12-06 13:57:07.113201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:25712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.227 [2024-12-06 13:57:07.113215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:42.227 [2024-12-06 13:57:07.113234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:25720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.227 [2024-12-06 13:57:07.113248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:42.227 [2024-12-06 13:57:07.113267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.227 [2024-12-06 13:57:07.113282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:42.227 [2024-12-06 13:57:07.113301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:25736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.227 [2024-12-06 13:57:07.113315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:42.227 [2024-12-06 13:57:07.113335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.227 [2024-12-06 13:57:07.113350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:42.227 [2024-12-06 13:57:07.113369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:25752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.227 [2024-12-06 13:57:07.113391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002d 
p:0 m:0 dnr:0 00:18:42.227 [2024-12-06 13:57:07.113412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.227 [2024-12-06 13:57:07.113427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:42.227 [2024-12-06 13:57:07.113446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:25768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.227 [2024-12-06 13:57:07.113461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:42.227 [2024-12-06 13:57:07.113480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:26320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.227 [2024-12-06 13:57:07.113494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:42.227 [2024-12-06 13:57:07.113513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:26328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.227 [2024-12-06 13:57:07.113527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:42.227 [2024-12-06 13:57:07.113546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:26336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.227 [2024-12-06 13:57:07.113560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:42.227 [2024-12-06 13:57:07.113580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:26344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.227 [2024-12-06 13:57:07.113611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:42.227 [2024-12-06 13:57:07.113656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:26352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.227 [2024-12-06 13:57:07.113675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:42.227 [2024-12-06 13:57:07.113696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:26360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.227 [2024-12-06 13:57:07.113711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:42.227 [2024-12-06 13:57:07.113730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:26368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.227 [2024-12-06 13:57:07.113745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:42.227 [2024-12-06 13:57:07.113764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:26376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.227 [2024-12-06 13:57:07.113778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:42.227 [2024-12-06 13:57:07.113798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:26384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.227 [2024-12-06 13:57:07.113812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:42.227 [2024-12-06 13:57:07.113832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:26392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.227 [2024-12-06 13:57:07.113846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:42.227 [2024-12-06 13:57:07.113876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:26400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.227 [2024-12-06 13:57:07.113891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:42.227 [2024-12-06 13:57:07.113911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:26408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.227 [2024-12-06 13:57:07.113926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:42.227 [2024-12-06 13:57:07.113945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:25776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.227 [2024-12-06 13:57:07.113960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:42.227 [2024-12-06 13:57:07.113981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:25784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.227 [2024-12-06 13:57:07.113996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:42.227 [2024-12-06 13:57:07.114016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:25792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.227 [2024-12-06 13:57:07.114030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:42.227 [2024-12-06 13:57:07.114050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:25800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.227 [2024-12-06 13:57:07.114065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:42.227 [2024-12-06 13:57:07.114085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:25808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.227 [2024-12-06 13:57:07.114099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:42.227 [2024-12-06 13:57:07.114147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.227 [2024-12-06 13:57:07.114165] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:42.227 [2024-12-06 13:57:07.114186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.228 [2024-12-06 13:57:07.114201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:42.228 [2024-12-06 13:57:07.114222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:25832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.228 [2024-12-06 13:57:07.114237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:42.228 [2024-12-06 13:57:07.114257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:26416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.228 [2024-12-06 13:57:07.114272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:42.228 [2024-12-06 13:57:07.114293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.228 [2024-12-06 13:57:07.114307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:42.228 [2024-12-06 13:57:07.114331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:26432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.228 [2024-12-06 13:57:07.114350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:42.228 [2024-12-06 13:57:07.114371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.228 [2024-12-06 13:57:07.114386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:42.228 [2024-12-06 13:57:07.114407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.228 [2024-12-06 13:57:07.114422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:42.228 [2024-12-06 13:57:07.114443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:26456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.228 [2024-12-06 13:57:07.114457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:42.228 [2024-12-06 13:57:07.114478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:26464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.228 [2024-12-06 13:57:07.114493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:42.228 [2024-12-06 13:57:07.114528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.228 
[2024-12-06 13:57:07.114543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:42.228 [2024-12-06 13:57:07.114562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:26480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.228 [2024-12-06 13:57:07.114578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:42.228 [2024-12-06 13:57:07.114599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:26488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.228 [2024-12-06 13:57:07.114613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:42.228 [2024-12-06 13:57:07.114633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:26496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.228 [2024-12-06 13:57:07.114647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:42.228 [2024-12-06 13:57:07.114667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:26504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.228 [2024-12-06 13:57:07.114682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:42.228 [2024-12-06 13:57:07.114701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:26512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.228 [2024-12-06 13:57:07.114716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:42.228 [2024-12-06 13:57:07.114735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:26520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.228 [2024-12-06 13:57:07.114750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:42.228 [2024-12-06 13:57:07.114770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:26528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.228 [2024-12-06 13:57:07.114791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:42.228 [2024-12-06 13:57:07.114812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:26536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.228 [2024-12-06 13:57:07.114827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:42.228 [2024-12-06 13:57:07.114847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:25840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.228 [2024-12-06 13:57:07.114861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:42.228 [2024-12-06 13:57:07.114881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:25848 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.228 [2024-12-06 13:57:07.114896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:42.228 [2024-12-06 13:57:07.114916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.228 [2024-12-06 13:57:07.114930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:42.228 [2024-12-06 13:57:07.114950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:25864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.228 [2024-12-06 13:57:07.114965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:42.228 [2024-12-06 13:57:07.114985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:25872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.228 [2024-12-06 13:57:07.115000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:42.228 [2024-12-06 13:57:07.115020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:25880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.228 [2024-12-06 13:57:07.115034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:42.228 [2024-12-06 13:57:07.115054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.228 [2024-12-06 13:57:07.115068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:42.228 [2024-12-06 13:57:07.115089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.228 [2024-12-06 13:57:07.115103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:42.228 [2024-12-06 13:57:07.115150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:25904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.228 [2024-12-06 13:57:07.115167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:42.228 [2024-12-06 13:57:07.115189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:25912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.228 [2024-12-06 13:57:07.115204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:42.228 [2024-12-06 13:57:07.115225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:25920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.228 [2024-12-06 13:57:07.115247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:42.228 [2024-12-06 13:57:07.115269] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:25928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.228 [2024-12-06 13:57:07.115284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:42.228 [2024-12-06 13:57:07.115305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:25936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.228 [2024-12-06 13:57:07.115331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:42.228 [2024-12-06 13:57:07.115355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:25944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.228 [2024-12-06 13:57:07.115370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:42.228 [2024-12-06 13:57:07.115391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.228 [2024-12-06 13:57:07.115406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:42.229 [2024-12-06 13:57:07.115427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:25960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.229 [2024-12-06 13:57:07.115442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:42.229 [2024-12-06 13:57:07.115463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.229 [2024-12-06 13:57:07.115478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:42.229 [2024-12-06 13:57:07.115498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:25976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.229 [2024-12-06 13:57:07.115513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:42.229 [2024-12-06 13:57:07.115534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:25984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.229 [2024-12-06 13:57:07.115548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:42.229 [2024-12-06 13:57:07.115569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:25992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.229 [2024-12-06 13:57:07.115584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:42.229 [2024-12-06 13:57:07.115619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:26000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.229 [2024-12-06 13:57:07.115634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0068 p:0 m:0 
dnr:0 00:18:42.229 [2024-12-06 13:57:07.115654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:26008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.229 [2024-12-06 13:57:07.115669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:42.229 [2024-12-06 13:57:07.115689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.229 [2024-12-06 13:57:07.115719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:42.229 [2024-12-06 13:57:07.115747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:26024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.229 [2024-12-06 13:57:07.115763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:42.229 [2024-12-06 13:57:07.115788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:26544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.229 [2024-12-06 13:57:07.115804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:42.229 [2024-12-06 13:57:07.115826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:26552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.229 [2024-12-06 13:57:07.115841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:42.229 [2024-12-06 13:57:07.115861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:26560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.229 [2024-12-06 13:57:07.115876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:42.229 [2024-12-06 13:57:07.115897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:26568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.229 [2024-12-06 13:57:07.115911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:42.229 [2024-12-06 13:57:07.115932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:26576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.229 [2024-12-06 13:57:07.115947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:42.229 [2024-12-06 13:57:07.115967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:26584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.229 [2024-12-06 13:57:07.115999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:42.229 [2024-12-06 13:57:07.116037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:26592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.229 [2024-12-06 13:57:07.116053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:42.229 [2024-12-06 13:57:07.116075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.229 [2024-12-06 13:57:07.116090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:42.229 [2024-12-06 13:57:07.116112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:26032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.229 [2024-12-06 13:57:07.116129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:42.229 [2024-12-06 13:57:07.116151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:26040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.229 [2024-12-06 13:57:07.116181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:42.229 [2024-12-06 13:57:07.116205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:26048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.229 [2024-12-06 13:57:07.116221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:42.229 [2024-12-06 13:57:07.116251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:26056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.229 [2024-12-06 13:57:07.116268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:42.229 [2024-12-06 13:57:07.116290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:26064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.229 [2024-12-06 13:57:07.116306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:42.229 [2024-12-06 13:57:07.116328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:26072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.229 [2024-12-06 13:57:07.116352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:42.229 [2024-12-06 13:57:07.116375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:26080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.229 [2024-12-06 13:57:07.116391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:42.229 [2024-12-06 13:57:07.116413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:26088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.229 [2024-12-06 13:57:07.116444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:42.229 [2024-12-06 13:57:07.116465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:26096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.229 [2024-12-06 13:57:07.116480] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:42.229 [2024-12-06 13:57:07.116502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:26104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.229 [2024-12-06 13:57:07.116517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:42.229 [2024-12-06 13:57:07.116538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:26112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.229 [2024-12-06 13:57:07.116553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:42.229 [2024-12-06 13:57:07.116574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:26120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.229 [2024-12-06 13:57:07.116590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:42.229 [2024-12-06 13:57:07.116611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:26128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.229 [2024-12-06 13:57:07.116641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.229 [2024-12-06 13:57:07.116661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:26136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.229 [2024-12-06 13:57:07.116676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.229 [2024-12-06 13:57:07.116698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:26144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.229 [2024-12-06 13:57:07.116714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:42.229 [2024-12-06 13:57:07.117347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:26152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.229 [2024-12-06 13:57:07.117384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:42.229 [2024-12-06 13:57:07.117418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:26608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.229 [2024-12-06 13:57:07.117451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:42.229 [2024-12-06 13:57:07.117478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:26616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.229 [2024-12-06 13:57:07.117492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:42.229 [2024-12-06 13:57:07.117519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26624 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:42.229 [2024-12-06 13:57:07.117533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:42.229 [2024-12-06 13:57:07.117559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:26632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.229 [2024-12-06 13:57:07.117574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:42.229 [2024-12-06 13:57:07.117601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:26640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.229 [2024-12-06 13:57:07.117615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:42.229 [2024-12-06 13:57:07.117642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:26648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.229 [2024-12-06 13:57:07.117663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:42.230 [2024-12-06 13:57:07.117691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:26656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.230 [2024-12-06 13:57:07.117706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:42.230 [2024-12-06 13:57:07.117747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:26664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.230 [2024-12-06 13:57:07.117766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:42.230 9082.45 IOPS, 35.48 MiB/s [2024-12-06T13:57:41.634Z] 8687.57 IOPS, 33.94 MiB/s [2024-12-06T13:57:41.634Z] 8325.58 IOPS, 32.52 MiB/s [2024-12-06T13:57:41.634Z] 7992.56 IOPS, 31.22 MiB/s [2024-12-06T13:57:41.634Z] 7685.15 IOPS, 30.02 MiB/s [2024-12-06T13:57:41.634Z] 7400.52 IOPS, 28.91 MiB/s [2024-12-06T13:57:41.634Z] 7136.21 IOPS, 27.88 MiB/s [2024-12-06T13:57:41.634Z] 6935.97 IOPS, 27.09 MiB/s [2024-12-06T13:57:41.634Z] 6994.10 IOPS, 27.32 MiB/s [2024-12-06T13:57:41.634Z] 7072.23 IOPS, 27.63 MiB/s [2024-12-06T13:57:41.634Z] 7147.97 IOPS, 27.92 MiB/s [2024-12-06T13:57:41.634Z] 7211.36 IOPS, 28.17 MiB/s [2024-12-06T13:57:41.634Z] 7270.32 IOPS, 28.40 MiB/s [2024-12-06T13:57:41.634Z] 7326.83 IOPS, 28.62 MiB/s [2024-12-06T13:57:41.634Z] [2024-12-06 13:57:20.508191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:103928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.230 [2024-12-06 13:57:20.508244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:42.230 [2024-12-06 13:57:20.508313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.230 [2024-12-06 13:57:20.508333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:42.230 [2024-12-06 13:57:20.508437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:34 nsid:1 lba:103944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.230 [2024-12-06 13:57:20.508453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.230 [2024-12-06 13:57:20.508473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:103952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.230 [2024-12-06 13:57:20.508486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:42.230 [2024-12-06 13:57:20.508520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:103960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.230 [2024-12-06 13:57:20.508533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:42.230 [2024-12-06 13:57:20.508552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:103968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.230 [2024-12-06 13:57:20.508565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:42.230 [2024-12-06 13:57:20.508583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.230 [2024-12-06 13:57:20.508596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:42.230 [2024-12-06 13:57:20.508615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:103984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.230 [2024-12-06 13:57:20.508628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:42.230 [2024-12-06 13:57:20.508646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:103416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.230 [2024-12-06 13:57:20.508659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:42.230 [2024-12-06 13:57:20.508678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:103424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.230 [2024-12-06 13:57:20.508692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:42.230 [2024-12-06 13:57:20.508710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:103432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.230 [2024-12-06 13:57:20.508723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:42.230 [2024-12-06 13:57:20.508741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:103440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.230 [2024-12-06 13:57:20.508755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:42.230 [2024-12-06 
13:57:20.508773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.230 [2024-12-06 13:57:20.508786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:42.230 [2024-12-06 13:57:20.508805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:103456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.230 [2024-12-06 13:57:20.508818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:42.230 [2024-12-06 13:57:20.508836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.230 [2024-12-06 13:57:20.508858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:42.230 [2024-12-06 13:57:20.508879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:103472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.230 [2024-12-06 13:57:20.508893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:42.230 [2024-12-06 13:57:20.508938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:103992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.230 [2024-12-06 13:57:20.508957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.230 [2024-12-06 13:57:20.508973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.230 [2024-12-06 13:57:20.508985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.230 [2024-12-06 13:57:20.508998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:104008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.230 [2024-12-06 13:57:20.509010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.230 [2024-12-06 13:57:20.509024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:104016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.230 [2024-12-06 13:57:20.509036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.230 [2024-12-06 13:57:20.509049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:104024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.230 [2024-12-06 13:57:20.509060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.230 [2024-12-06 13:57:20.509073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:104032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.230 [2024-12-06 13:57:20.509085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.230 [2024-12-06 
13:57:20.509097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:104040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.230 [2024-12-06 13:57:20.509109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.230 [2024-12-06 13:57:20.509137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:104048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.230 [2024-12-06 13:57:20.509150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.230 [2024-12-06 13:57:20.509163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:104056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.230 [2024-12-06 13:57:20.509175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.230 [2024-12-06 13:57:20.509188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:104064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.230 [2024-12-06 13:57:20.509200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.230 [2024-12-06 13:57:20.509212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:104072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.230 [2024-12-06 13:57:20.509224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.230 [2024-12-06 13:57:20.509247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:104080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.230 [2024-12-06 13:57:20.509259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.230 [2024-12-06 13:57:20.509272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:104088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.230 [2024-12-06 13:57:20.509284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.230 [2024-12-06 13:57:20.509297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:104096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.230 [2024-12-06 13:57:20.509309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.230 [2024-12-06 13:57:20.509322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:104104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.230 [2024-12-06 13:57:20.509334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.230 [2024-12-06 13:57:20.509347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:104112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.230 [2024-12-06 13:57:20.509359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.230 [2024-12-06 13:57:20.509371] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:104120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.230 [2024-12-06 13:57:20.509383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.230 [2024-12-06 13:57:20.509396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:104128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.230 [2024-12-06 13:57:20.509408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.230 [2024-12-06 13:57:20.509421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:104136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.231 [2024-12-06 13:57:20.509432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.231 [2024-12-06 13:57:20.509446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:104144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.231 [2024-12-06 13:57:20.509457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.231 [2024-12-06 13:57:20.509470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:103480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.231 [2024-12-06 13:57:20.509482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.231 [2024-12-06 13:57:20.509494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:103488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.231 [2024-12-06 13:57:20.509506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.231 [2024-12-06 13:57:20.509519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.231 [2024-12-06 13:57:20.509531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.231 [2024-12-06 13:57:20.509543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:103504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.231 [2024-12-06 13:57:20.509561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.231 [2024-12-06 13:57:20.509575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:103512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.231 [2024-12-06 13:57:20.509587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.231 [2024-12-06 13:57:20.509600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:103520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.231 [2024-12-06 13:57:20.509611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.231 [2024-12-06 13:57:20.509624] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:103528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.231 [2024-12-06 13:57:20.509636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.231 [2024-12-06 13:57:20.509649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:103536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.231 [2024-12-06 13:57:20.509661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.231 [2024-12-06 13:57:20.509673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:103544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.231 [2024-12-06 13:57:20.509685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.231 [2024-12-06 13:57:20.509698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:103552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.231 [2024-12-06 13:57:20.509709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.231 [2024-12-06 13:57:20.509722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:103560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.231 [2024-12-06 13:57:20.509734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.231 [2024-12-06 13:57:20.509746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:103568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.231 [2024-12-06 13:57:20.509758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.231 [2024-12-06 13:57:20.509770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:103576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.231 [2024-12-06 13:57:20.509782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.231 [2024-12-06 13:57:20.509795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:103584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.231 [2024-12-06 13:57:20.509808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.231 [2024-12-06 13:57:20.509821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:103592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.231 [2024-12-06 13:57:20.509833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.231 [2024-12-06 13:57:20.509846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:103600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.231 [2024-12-06 13:57:20.509858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.231 [2024-12-06 13:57:20.509877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 
nsid:1 lba:104152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.231 [2024-12-06 13:57:20.509890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.231 [2024-12-06 13:57:20.509903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.231 [2024-12-06 13:57:20.509914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.231 [2024-12-06 13:57:20.509927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:104168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.231 [2024-12-06 13:57:20.509938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.231 [2024-12-06 13:57:20.509951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:104176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.231 [2024-12-06 13:57:20.509962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.231 [2024-12-06 13:57:20.509975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.231 [2024-12-06 13:57:20.509987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.231 [2024-12-06 13:57:20.509999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:104192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.231 [2024-12-06 13:57:20.510011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.231 [2024-12-06 13:57:20.510024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:104200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.231 [2024-12-06 13:57:20.510036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.231 [2024-12-06 13:57:20.510049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:104208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.231 [2024-12-06 13:57:20.510060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.231 [2024-12-06 13:57:20.510073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.231 [2024-12-06 13:57:20.510085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.231 [2024-12-06 13:57:20.510108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:104224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.231 [2024-12-06 13:57:20.510122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.231 [2024-12-06 13:57:20.510135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:104232 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:18:42.231 [2024-12-06 13:57:20.510147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.231 [2024-12-06 13:57:20.510160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:104240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.231 [2024-12-06 13:57:20.510172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.231 [2024-12-06 13:57:20.510184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:103608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.231 [2024-12-06 13:57:20.510202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.231 [2024-12-06 13:57:20.510216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:103616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.231 [2024-12-06 13:57:20.510229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.231 [2024-12-06 13:57:20.510242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:103624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.231 [2024-12-06 13:57:20.510255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.231 [2024-12-06 13:57:20.510268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:103632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.231 [2024-12-06 13:57:20.510280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.231 [2024-12-06 13:57:20.510293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:103640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.231 [2024-12-06 13:57:20.510304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.231 [2024-12-06 13:57:20.510317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:103648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.231 [2024-12-06 13:57:20.510328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.231 [2024-12-06 13:57:20.510341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:103656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.231 [2024-12-06 13:57:20.510353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.231 [2024-12-06 13:57:20.510366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:103664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.231 [2024-12-06 13:57:20.510377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.231 [2024-12-06 13:57:20.510390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:104248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.231 
[2024-12-06 13:57:20.510401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.231 [2024-12-06 13:57:20.510414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:104256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.231 [2024-12-06 13:57:20.510426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.232 [2024-12-06 13:57:20.510439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:104264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.232 [2024-12-06 13:57:20.510450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.232 [2024-12-06 13:57:20.510463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:104272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.232 [2024-12-06 13:57:20.510474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.232 [2024-12-06 13:57:20.510488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.232 [2024-12-06 13:57:20.510500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.232 [2024-12-06 13:57:20.510520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:104288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.232 [2024-12-06 13:57:20.510533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.232 [2024-12-06 13:57:20.510546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:104296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.232 [2024-12-06 13:57:20.510558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.232 [2024-12-06 13:57:20.510570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:104304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.232 [2024-12-06 13:57:20.510582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.232 [2024-12-06 13:57:20.510595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:104312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.232 [2024-12-06 13:57:20.510606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.232 [2024-12-06 13:57:20.510620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:104320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.232 [2024-12-06 13:57:20.510632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.232 [2024-12-06 13:57:20.510645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:104328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.232 [2024-12-06 13:57:20.510656] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.232 [2024-12-06 13:57:20.510678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:104336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.232 [2024-12-06 13:57:20.510690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.232 [2024-12-06 13:57:20.510703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:104344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.232 [2024-12-06 13:57:20.510715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.232 [2024-12-06 13:57:20.510727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:104352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.232 [2024-12-06 13:57:20.510739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.232 [2024-12-06 13:57:20.510752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:104360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.232 [2024-12-06 13:57:20.510763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.232 [2024-12-06 13:57:20.510794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:104368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.232 [2024-12-06 13:57:20.510806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.232 [2024-12-06 13:57:20.510819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.232 [2024-12-06 13:57:20.510831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.232 [2024-12-06 13:57:20.510845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:103680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.232 [2024-12-06 13:57:20.510856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.232 [2024-12-06 13:57:20.510876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:103688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.232 [2024-12-06 13:57:20.510888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.232 [2024-12-06 13:57:20.510902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:103696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.232 [2024-12-06 13:57:20.510913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.232 [2024-12-06 13:57:20.510927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:103704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.232 [2024-12-06 13:57:20.510939] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.232 [2024-12-06 13:57:20.510952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:103712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.232 [2024-12-06 13:57:20.510964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.232 [2024-12-06 13:57:20.510977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:103720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.232 [2024-12-06 13:57:20.510989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.232 [2024-12-06 13:57:20.511002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:103728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.232 [2024-12-06 13:57:20.511014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.232 [2024-12-06 13:57:20.511027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:103736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.232 [2024-12-06 13:57:20.511039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.232 [2024-12-06 13:57:20.511058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:103744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.232 [2024-12-06 13:57:20.511071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.232 [2024-12-06 13:57:20.511084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:103752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.232 [2024-12-06 13:57:20.511096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.232 [2024-12-06 13:57:20.511123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:103760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.232 [2024-12-06 13:57:20.511138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.232 [2024-12-06 13:57:20.511151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:103768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.232 [2024-12-06 13:57:20.511164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.232 [2024-12-06 13:57:20.511177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:103776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.232 [2024-12-06 13:57:20.511189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.232 [2024-12-06 13:57:20.511203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:103784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.232 [2024-12-06 13:57:20.511221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.232 [2024-12-06 13:57:20.511235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:103792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.232 [2024-12-06 13:57:20.511248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.232 [2024-12-06 13:57:20.511261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:103800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.232 [2024-12-06 13:57:20.511274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.232 [2024-12-06 13:57:20.511287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:103808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.232 [2024-12-06 13:57:20.511299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.232 [2024-12-06 13:57:20.511339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:103816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.232 [2024-12-06 13:57:20.511354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.232 [2024-12-06 13:57:20.511369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:103824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.232 [2024-12-06 13:57:20.511381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.232 [2024-12-06 13:57:20.511395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:103832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.232 [2024-12-06 13:57:20.511407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.232 [2024-12-06 13:57:20.511421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.233 [2024-12-06 13:57:20.511434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.233 [2024-12-06 13:57:20.511447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:103848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.233 [2024-12-06 13:57:20.511460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.233 [2024-12-06 13:57:20.511473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14e0310 is same with the state(6) to be set 00:18:42.233 [2024-12-06 13:57:20.511488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:42.233 [2024-12-06 13:57:20.511497] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:42.233 [2024-12-06 13:57:20.511507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103856 len:8 PRP1 0x0 PRP2 0x0 00:18:42.233 [2024-12-06 13:57:20.511524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.233 [2024-12-06 13:57:20.511538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:42.233 [2024-12-06 13:57:20.511547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:42.233 [2024-12-06 13:57:20.511557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104376 len:8 PRP1 0x0 PRP2 0x0 00:18:42.233 [2024-12-06 13:57:20.511573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.233 [2024-12-06 13:57:20.511604] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:42.233 [2024-12-06 13:57:20.511615] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:42.233 [2024-12-06 13:57:20.511625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104384 len:8 PRP1 0x0 PRP2 0x0 00:18:42.233 [2024-12-06 13:57:20.511637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.233 [2024-12-06 13:57:20.511664] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:42.233 [2024-12-06 13:57:20.511673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:42.233 [2024-12-06 13:57:20.511682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104392 len:8 PRP1 0x0 PRP2 0x0 00:18:42.233 [2024-12-06 13:57:20.511693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.233 [2024-12-06 13:57:20.511705] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:42.233 [2024-12-06 13:57:20.511714] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:42.233 [2024-12-06 13:57:20.511723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104400 len:8 PRP1 0x0 PRP2 0x0 00:18:42.233 [2024-12-06 13:57:20.511735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.233 [2024-12-06 13:57:20.511747] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:42.233 [2024-12-06 13:57:20.511756] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:42.233 [2024-12-06 13:57:20.511765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104408 len:8 PRP1 0x0 PRP2 0x0 00:18:42.233 [2024-12-06 13:57:20.511776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.233 [2024-12-06 13:57:20.511788] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:42.233 [2024-12-06 13:57:20.511797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:42.233 [2024-12-06 13:57:20.511806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104416 len:8 PRP1 0x0 PRP2 0x0 00:18:42.233 [2024-12-06 13:57:20.511817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:18:42.233 [2024-12-06 13:57:20.511829] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:42.233 [2024-12-06 13:57:20.511837] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:42.233 [2024-12-06 13:57:20.511846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104424 len:8 PRP1 0x0 PRP2 0x0 00:18:42.233 [2024-12-06 13:57:20.511857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.233 [2024-12-06 13:57:20.511869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:42.233 [2024-12-06 13:57:20.511878] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:42.233 [2024-12-06 13:57:20.511887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104432 len:8 PRP1 0x0 PRP2 0x0 00:18:42.233 [2024-12-06 13:57:20.511903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.233 [2024-12-06 13:57:20.511915] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:42.233 [2024-12-06 13:57:20.511924] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:42.233 [2024-12-06 13:57:20.511933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103864 len:8 PRP1 0x0 PRP2 0x0 00:18:42.233 [2024-12-06 13:57:20.511954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.233 [2024-12-06 13:57:20.511967] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:42.233 [2024-12-06 13:57:20.511976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:42.233 [2024-12-06 13:57:20.511985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103872 len:8 PRP1 0x0 PRP2 0x0 00:18:42.233 [2024-12-06 13:57:20.511996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.233 [2024-12-06 13:57:20.512008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:42.233 [2024-12-06 13:57:20.512017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:42.233 [2024-12-06 13:57:20.512026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103880 len:8 PRP1 0x0 PRP2 0x0 00:18:42.233 [2024-12-06 13:57:20.512038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.233 [2024-12-06 13:57:20.512049] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:42.233 [2024-12-06 13:57:20.512058] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:42.233 [2024-12-06 13:57:20.512067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103888 len:8 PRP1 0x0 PRP2 0x0 00:18:42.233 [2024-12-06 13:57:20.512079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.233 [2024-12-06 
13:57:20.512090] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:42.233 [2024-12-06 13:57:20.512099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:42.233 [2024-12-06 13:57:20.512124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103896 len:8 PRP1 0x0 PRP2 0x0 00:18:42.233 [2024-12-06 13:57:20.512148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.233 [2024-12-06 13:57:20.512161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:42.233 [2024-12-06 13:57:20.512170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:42.233 [2024-12-06 13:57:20.512180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103904 len:8 PRP1 0x0 PRP2 0x0 00:18:42.233 [2024-12-06 13:57:20.512192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.233 [2024-12-06 13:57:20.512204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:42.233 [2024-12-06 13:57:20.512213] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:42.233 [2024-12-06 13:57:20.512222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103912 len:8 PRP1 0x0 PRP2 0x0 00:18:42.233 [2024-12-06 13:57:20.512234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.233 [2024-12-06 13:57:20.512246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:42.233 [2024-12-06 13:57:20.512255] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:42.233 [2024-12-06 13:57:20.512265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103920 len:8 PRP1 0x0 PRP2 0x0 00:18:42.233 [2024-12-06 13:57:20.512281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.233 [2024-12-06 13:57:20.512421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:42.233 [2024-12-06 13:57:20.512446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.233 [2024-12-06 13:57:20.512470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:42.233 [2024-12-06 13:57:20.512485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.233 [2024-12-06 13:57:20.512497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:42.233 [2024-12-06 13:57:20.512524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.233 [2024-12-06 13:57:20.512536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:42.233 [2024-12-06 
13:57:20.512548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.233 [2024-12-06 13:57:20.512561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.233 [2024-12-06 13:57:20.512573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.233 [2024-12-06 13:57:20.512591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450e90 is same with the state(6) to be set 00:18:42.233 [2024-12-06 13:57:20.513624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:18:42.233 [2024-12-06 13:57:20.513685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1450e90 (9): Bad file descriptor 00:18:42.233 [2024-12-06 13:57:20.514043] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:42.233 [2024-12-06 13:57:20.514074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1450e90 with addr=10.0.0.3, port=4421 00:18:42.233 [2024-12-06 13:57:20.514090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1450e90 is same with the state(6) to be set 00:18:42.233 [2024-12-06 13:57:20.514154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1450e90 (9): Bad file descriptor 00:18:42.233 [2024-12-06 13:57:20.514185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:18:42.234 [2024-12-06 13:57:20.514200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:18:42.234 [2024-12-06 13:57:20.514214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:18:42.234 [2024-12-06 13:57:20.514226] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:18:42.234 [2024-12-06 13:57:20.514239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:18:42.234 7394.89 IOPS, 28.89 MiB/s [2024-12-06T13:57:41.638Z] 7457.62 IOPS, 29.13 MiB/s [2024-12-06T13:57:41.638Z] 7525.05 IOPS, 29.39 MiB/s [2024-12-06T13:57:41.638Z] 7590.77 IOPS, 29.65 MiB/s [2024-12-06T13:57:41.638Z] 7649.80 IOPS, 29.88 MiB/s [2024-12-06T13:57:41.638Z] 7706.34 IOPS, 30.10 MiB/s [2024-12-06T13:57:41.638Z] 7761.71 IOPS, 30.32 MiB/s [2024-12-06T13:57:41.638Z] 7810.14 IOPS, 30.51 MiB/s [2024-12-06T13:57:41.638Z] 7860.91 IOPS, 30.71 MiB/s [2024-12-06T13:57:41.638Z] 7911.64 IOPS, 30.90 MiB/s [2024-12-06T13:57:41.638Z] [2024-12-06 13:57:30.579297] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
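The per-interval samples above pair an IOPS figure with a MiB/s figure; with the 4096-byte I/O size this bdevperf job uses, the two are the same measurement in different units. A quick sanity check of the first sample (the awk one-liner is illustrative only, not part of the test):

awk 'BEGIN { printf "%.2f MiB/s\n", 7394.89 * 4096 / (1024 * 1024) }'
# prints 28.89 MiB/s, matching the 7394.89 IOPS sample above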
00:18:42.234 7939.28 IOPS, 31.01 MiB/s [2024-12-06T13:57:41.638Z] 7965.94 IOPS, 31.12 MiB/s [2024-12-06T13:57:41.638Z] 7992.15 IOPS, 31.22 MiB/s [2024-12-06T13:57:41.638Z] 8021.55 IOPS, 31.33 MiB/s [2024-12-06T13:57:41.638Z] 8040.94 IOPS, 31.41 MiB/s [2024-12-06T13:57:41.638Z] 8066.49 IOPS, 31.51 MiB/s [2024-12-06T13:57:41.638Z] 8090.10 IOPS, 31.60 MiB/s [2024-12-06T13:57:41.638Z] 8107.45 IOPS, 31.67 MiB/s [2024-12-06T13:57:41.638Z] 8130.65 IOPS, 31.76 MiB/s [2024-12-06T13:57:41.638Z] 8154.18 IOPS, 31.85 MiB/s [2024-12-06T13:57:41.638Z] Received shutdown signal, test time was about 55.512539 seconds 00:18:42.234 00:18:42.234 Latency(us) 00:18:42.234 [2024-12-06T13:57:41.638Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.234 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:42.234 Verification LBA range: start 0x0 length 0x4000 00:18:42.234 Nvme0n1 : 55.51 8160.26 31.88 0.00 0.00 15657.79 1012.83 7046430.72 00:18:42.234 [2024-12-06T13:57:41.638Z] =================================================================================================================== 00:18:42.234 [2024-12-06T13:57:41.638Z] Total : 8160.26 31.88 0.00 0.00 15657.79 1012.83 7046430.72 00:18:42.234 13:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:42.234 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:18:42.234 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:42.234 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:18:42.234 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:42.234 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:18:42.234 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:42.234 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:18:42.234 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:42.234 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:42.234 rmmod nvme_tcp 00:18:42.234 rmmod nvme_fabrics 00:18:42.234 rmmod nvme_keyring 00:18:42.234 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:42.234 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:18:42.234 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:18:42.234 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 80676 ']' 00:18:42.234 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 80676 00:18:42.234 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 80676 ']' 00:18:42.234 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 80676 00:18:42.234 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:18:42.234 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:42.234 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80676 00:18:42.234 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:42.234 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:42.234 killing process with pid 80676 00:18:42.234 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80676' 00:18:42.234 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 80676 00:18:42.234 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 80676 00:18:42.234 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:42.234 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:42.234 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:42.234 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:18:42.234 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:18:42.234 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:42.234 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:18:42.234 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:42.234 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:42.234 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:42.234 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:42.234 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:42.234 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:42.493 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:42.493 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:42.493 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:42.493 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:42.493 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:42.493 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:42.493 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:42.493 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:42.493 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:42.493 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:42.493 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.493 13:57:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:42.493 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.493 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:18:42.493 00:18:42.493 real 1m0.602s 00:18:42.493 user 2m48.024s 00:18:42.493 sys 0m17.948s 00:18:42.493 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:42.493 ************************************ 00:18:42.493 13:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:42.493 END TEST nvmf_host_multipath 00:18:42.493 ************************************ 00:18:42.493 13:57:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:42.493 13:57:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:42.493 13:57:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:42.493 13:57:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.493 ************************************ 00:18:42.493 START TEST nvmf_timeout 00:18:42.493 ************************************ 00:18:42.493 13:57:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:42.493 * Looking for test storage... 00:18:42.493 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:42.493 13:57:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:42.753 13:57:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:42.753 13:57:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:18:42.753 13:57:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:42.753 13:57:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:42.753 13:57:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:42.753 13:57:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:42.753 13:57:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:18:42.753 13:57:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:18:42.753 13:57:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:18:42.753 13:57:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:18:42.753 13:57:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:18:42.753 13:57:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:18:42.753 13:57:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:18:42.753 13:57:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:42.753 13:57:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:18:42.753 13:57:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:18:42.753 13:57:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:42.753 13:57:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:42.753 13:57:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:18:42.753 13:57:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:18:42.753 13:57:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:42.753 13:57:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:18:42.753 13:57:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:18:42.753 13:57:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:18:42.753 13:57:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:18:42.753 13:57:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:42.753 13:57:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:18:42.753 13:57:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:18:42.753 13:57:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:42.753 13:57:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:42.753 13:57:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:18:42.753 13:57:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:42.753 13:57:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:42.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.753 --rc genhtml_branch_coverage=1 00:18:42.753 --rc genhtml_function_coverage=1 00:18:42.753 --rc genhtml_legend=1 00:18:42.753 --rc geninfo_all_blocks=1 00:18:42.753 --rc geninfo_unexecuted_blocks=1 00:18:42.753 00:18:42.753 ' 00:18:42.753 13:57:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:42.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.753 --rc genhtml_branch_coverage=1 00:18:42.753 --rc genhtml_function_coverage=1 00:18:42.753 --rc genhtml_legend=1 00:18:42.753 --rc geninfo_all_blocks=1 00:18:42.753 --rc geninfo_unexecuted_blocks=1 00:18:42.753 00:18:42.753 ' 00:18:42.753 13:57:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:42.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.753 --rc genhtml_branch_coverage=1 00:18:42.753 --rc genhtml_function_coverage=1 00:18:42.753 --rc genhtml_legend=1 00:18:42.753 --rc geninfo_all_blocks=1 00:18:42.753 --rc geninfo_unexecuted_blocks=1 00:18:42.753 00:18:42.753 ' 00:18:42.753 13:57:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:42.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.753 --rc genhtml_branch_coverage=1 00:18:42.753 --rc genhtml_function_coverage=1 00:18:42.753 --rc genhtml_legend=1 00:18:42.753 --rc geninfo_all_blocks=1 00:18:42.753 --rc geninfo_unexecuted_blocks=1 00:18:42.753 00:18:42.753 ' 00:18:42.753 13:57:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:42.753 13:57:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:18:42.753 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:42.753 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:42.753 
13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:42.753 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:42.753 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:42.753 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:42.753 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:42.753 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:42.753 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:42.753 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:42.753 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:18:42.753 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=cfa2def7-c8af-457f-82a0-b312efdea7f4 00:18:42.753 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:42.753 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:42.753 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:42.753 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:42.753 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:42.753 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:18:42.753 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:42.753 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:42.753 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:42.753 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.753 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.753 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.753 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:18:42.753 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.753 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:18:42.753 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:42.753 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:42.753 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:42.753 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:42.753 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:42.753 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:42.753 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:42.753 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:42.753 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:42.753 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:42.753 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:42.753 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:42.753 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:42.754 13:57:42 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:42.754 Cannot find device "nvmf_init_br" 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:42.754 Cannot find device "nvmf_init_br2" 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:18:42.754 Cannot find device "nvmf_tgt_br" 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:42.754 Cannot find device "nvmf_tgt_br2" 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:42.754 Cannot find device "nvmf_init_br" 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:42.754 Cannot find device "nvmf_init_br2" 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:42.754 Cannot find device "nvmf_tgt_br" 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:42.754 Cannot find device "nvmf_tgt_br2" 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:42.754 Cannot find device "nvmf_br" 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:42.754 Cannot find device "nvmf_init_if" 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:18:42.754 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:43.013 Cannot find device "nvmf_init_if2" 00:18:43.013 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:18:43.013 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:43.013 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:43.013 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:18:43.013 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:43.013 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:43.013 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:18:43.013 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:43.013 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:43.013 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:43.013 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:43.013 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:43.013 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:18:43.013 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:43.013 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:43.013 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:43.013 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:43.013 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:43.013 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:43.013 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:43.013 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:43.013 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:43.013 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:43.013 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:43.013 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:43.013 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:43.013 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:43.013 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:43.013 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:43.013 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:43.013 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:43.013 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:43.013 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:43.013 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:43.013 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:43.013 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:43.013 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:43.013 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:43.013 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
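Everything traced from nvmf/common.sh@177 onward rebuilds the virtual test topology: the target runs in its own network namespace and reaches the initiator through veth pairs hung off a bridge, with iptables rules opening the NVMe/TCP port. A condensed sketch of the same topology, with device names, addresses and rules copied from the trace above (only the first initiator/target pair is shown; the trace sets up nvmf_init_if2 with 10.0.0.2 and nvmf_tgt_if2 with 10.0.0.4 the same way):

# the target side lives in its own namespace
ip netns add nvmf_tgt_ns_spdk

# one veth pair per side: the *_if end carries the address, the *_br end joins the bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end, 10.0.0.1
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target end, 10.0.0.3
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

# bridge the host-side ends together
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# let NVMe/TCP traffic in on the initiator interface and across the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The ping checks that follow confirm the result: 10.0.0.3 and 10.0.0.4 answer from the host side, and 10.0.0.1 and 10.0.0.2 answer from inside the namespace.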
00:18:43.014 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:43.014 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:43.014 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:18:43.014 00:18:43.014 --- 10.0.0.3 ping statistics --- 00:18:43.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.014 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:18:43.014 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:43.014 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:43.014 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:18:43.014 00:18:43.014 --- 10.0.0.4 ping statistics --- 00:18:43.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.014 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:18:43.014 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:43.014 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:43.014 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:18:43.014 00:18:43.014 --- 10.0.0.1 ping statistics --- 00:18:43.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.014 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:18:43.014 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:43.014 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:43.014 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:18:43.014 00:18:43.014 --- 10.0.0.2 ping statistics --- 00:18:43.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.014 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:18:43.014 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:43.014 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:18:43.014 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:43.014 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:43.014 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:43.014 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:43.014 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:43.014 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:43.014 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:43.014 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:18:43.014 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:43.014 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:43.014 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:43.014 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=81884 00:18:43.014 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:43.014 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 81884 00:18:43.014 13:57:42 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 81884 ']' 00:18:43.014 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.014 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:43.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:43.014 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.014 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:43.014 13:57:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:43.271 [2024-12-06 13:57:42.460354] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:18:43.271 [2024-12-06 13:57:42.460458] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:43.271 [2024-12-06 13:57:42.601825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:43.271 [2024-12-06 13:57:42.654256] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:43.271 [2024-12-06 13:57:42.654326] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:43.271 [2024-12-06 13:57:42.654353] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:43.271 [2024-12-06 13:57:42.654361] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:43.271 [2024-12-06 13:57:42.654368] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
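The nvmf_tgt launched above runs entirely inside nvmf_tgt_ns_spdk; -m 0x3 gives it cores 0 and 1 (matching the two reactor messages that follow) and -e 0xFFFF turns on every tracepoint group, which is what the app_setup_trace notices refer to. A rough equivalent of the launch-and-wait step; the readiness poll via spdk_get_version stands in for the test's waitforlisten helper and is only an approximation of what that helper does:

spdk=/home/vagrant/spdk_repo/spdk

# start the target inside the test namespace: two reactors (-m 0x3), all tracepoint groups (-e 0xFFFF)
ip netns exec nvmf_tgt_ns_spdk "$spdk"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!

# the RPC unix socket is on the shared filesystem, so rpc.py works from the host namespace;
# poll it until the target answers before sending any configuration RPCs
until "$spdk"/scripts/rpc.py spdk_get_version > /dev/null 2>&1; do
    sleep 0.5
done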
00:18:43.271 [2024-12-06 13:57:42.655670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:43.271 [2024-12-06 13:57:42.655678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.529 [2024-12-06 13:57:42.707496] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:44.096 13:57:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:44.096 13:57:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:18:44.096 13:57:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:44.096 13:57:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:44.096 13:57:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:44.096 13:57:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:44.096 13:57:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:44.096 13:57:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:44.354 [2024-12-06 13:57:43.683238] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:44.354 13:57:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:44.612 Malloc0 00:18:44.612 13:57:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:44.870 13:57:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:45.127 13:57:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:45.385 [2024-12-06 13:57:44.679253] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:45.385 13:57:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=81930 00:18:45.385 13:57:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 81930 /var/tmp/bdevperf.sock 00:18:45.385 13:57:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:18:45.385 13:57:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 81930 ']' 00:18:45.385 13:57:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:45.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:45.385 13:57:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:45.385 13:57:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
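With the target answering RPCs, timeout.sh provisions a minimal export and then starts a second SPDK application, bdevperf, which plays the host role on its own core and waits for RPC configuration. The sequence below restates the commands from the trace above in one place; arguments are copied verbatim from the log, and the inline comments are interpretation rather than anything the log itself states:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192                    # transport options exactly as traced
$rpc bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB RAM-backed bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# host side: bdevperf on core 2 (-m 0x4), started paused (-z) so it can be driven
# over /var/tmp/bdevperf.sock; 128 outstanding 4 KiB verify I/Os for 10 s once kicked off
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
bdevperf_pid=$!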
00:18:45.385 13:57:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:45.385 13:57:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:45.385 [2024-12-06 13:57:44.753550] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:18:45.385 [2024-12-06 13:57:44.753634] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81930 ] 00:18:45.643 [2024-12-06 13:57:44.906368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.643 [2024-12-06 13:57:44.960088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:45.643 [2024-12-06 13:57:45.016434] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:45.900 13:57:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:45.900 13:57:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:18:45.900 13:57:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:46.158 13:57:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:46.417 NVMe0n1 00:18:46.417 13:57:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=81945 00:18:46.417 13:57:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:46.417 13:57:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:18:46.417 Running I/O for 10 seconds... 
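This attach step carries the parameters the timeout test is built around: --reconnect-delay-sec 2 and --ctrlr-loss-timeout-sec 5. As I read those knobs, the host retries the connection roughly every 2 seconds after a drop and gives the controller up after about 5 seconds of failure, which is what the abort and reset messages further on exercise once the listener is removed. A condensed restatement of the host-side RPCs from the trace (flags copied verbatim; comments are interpretation):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

$rpc -s "$sock" bdev_nvme_set_options -r -1                     # flag copied verbatim from the trace

# attach 10.0.0.3:4420 as controller NVMe0 (its namespace shows up as bdev NVMe0n1);
# on connection loss, retry every ~2 s and declare the controller lost after ~5 s
$rpc -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

# start the queued 10-second verify job against the new bdev
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests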
00:18:47.352 13:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:47.613 7957.00 IOPS, 31.08 MiB/s [2024-12-06T13:57:47.017Z] [2024-12-06 13:57:46.828649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:70808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:47.613 [2024-12-06 13:57:46.828715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.613 [2024-12-06 13:57:46.828752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:69920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.613 [2024-12-06 13:57:46.828762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.613 [2024-12-06 13:57:46.828773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:69928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.613 [2024-12-06 13:57:46.828782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.613 [2024-12-06 13:57:46.828792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:69936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.613 [2024-12-06 13:57:46.828800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.613 [2024-12-06 13:57:46.828810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:69944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.613 [2024-12-06 13:57:46.828818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.613 [2024-12-06 13:57:46.828828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:69952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.613 [2024-12-06 13:57:46.828836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.613 [2024-12-06 13:57:46.828847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:69960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.613 [2024-12-06 13:57:46.828855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.613 [2024-12-06 13:57:46.828864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:69968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.613 [2024-12-06 13:57:46.828872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.613 [2024-12-06 13:57:46.828882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:70816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:47.613 [2024-12-06 13:57:46.828890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.613 [2024-12-06 13:57:46.828900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 
lba:69976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:47.613 [2024-12-06 13:57:46.828908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the READ print_command / print_completion pair above repeats for every outstanding I/O on qid:1, lba 69984 through 70792 (len:8 each); all are completed as ABORTED - SQ DELETION (00/08)]
00:18:47.617 [2024-12-06 13:57:46.831003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2a2e0 is same with the state(6) to be set
[the remaining queued requests are then completed manually: READ lba:70800 and WRITE lba:70824 through 70936 (len:8 each), each logged as "aborting queued i/o" / "Command completed manually:" and finished as ABORTED - SQ DELETION (00/08); four queued admin ASYNC EVENT REQUEST (0c) commands (qid:0 cid:0-3) are aborted the same way]
00:18:47.618 [2024-12-06 13:57:46.831809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bc070 is same with the state(6) to be set
00:18:47.618 [2024-12-06 13:57:46.832041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:18:47.618 [2024-12-06 13:57:46.832076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19bc070 (9): Bad file descriptor
00:18:47.618 [2024-12-06 13:57:46.832202] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:18:47.618 [2024-12-06 13:57:46.832231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bc070 with addr=10.0.0.3, port=4420
00:18:47.618 [2024-12-06 13:57:46.832243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bc070 is same with the state(6) to be set
00:18:47.618 [2024-12-06 13:57:46.832261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19bc070 (9): Bad file descriptor
00:18:47.618 [2024-12-06 13:57:46.832277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:18:47.618 [2024-12-06 13:57:46.832286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:18:47.618 [2024-12-06 13:57:46.832297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:18:47.618 [2024-12-06 13:57:46.832307] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
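errno 111 here is ECONNREFUSED on Linux: the TCP connect() to 10.0.0.3 port 4420 is being refused, so every reconnect attempt fails and bdev_nvme keeps cycling through the resetting / reinitialization-failed sequence above. A minimal, illustrative way to watch this from outside the test, reusing only the rpc.py call and jq filter that host/timeout.sh itself runs further down, is a poll loop along these lines (the loop and the variable names are not part of the test):

# Illustrative only: report whether bdevperf still tracks the NVMe0 controller
# while the target path is unreachable.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
for i in $(seq 1 10); do
    name=$("$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name')
    # prints "NVMe0" while bdev_nvme is still retrying the controller, and an
    # empty result once the controller has been dropped
    echo "poll $i: controllers: ${name:-<none>}"
    sleep 1
done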
00:18:47.618 [2024-12-06 13:57:46.832318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:47.618 13:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:18:49.496 4370.00 IOPS, 17.07 MiB/s [2024-12-06T13:57:48.900Z] 2913.33 IOPS, 11.38 MiB/s [2024-12-06T13:57:48.900Z] [2024-12-06 13:57:48.832472] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:49.496 [2024-12-06 13:57:48.832552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bc070 with addr=10.0.0.3, port=4420 00:18:49.496 [2024-12-06 13:57:48.832566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bc070 is same with the state(6) to be set 00:18:49.496 [2024-12-06 13:57:48.832587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19bc070 (9): Bad file descriptor 00:18:49.496 [2024-12-06 13:57:48.832605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:18:49.496 [2024-12-06 13:57:48.832615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:18:49.496 [2024-12-06 13:57:48.832625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:18:49.496 [2024-12-06 13:57:48.832635] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:18:49.496 [2024-12-06 13:57:48.832645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:49.496 13:57:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:18:49.496 13:57:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:49.496 13:57:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:18:49.755 13:57:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:18:49.755 13:57:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:18:49.755 13:57:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:18:49.755 13:57:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:18:50.014 13:57:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:18:50.014 13:57:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:18:51.518 2185.00 IOPS, 8.54 MiB/s [2024-12-06T13:57:50.922Z] 1748.00 IOPS, 6.83 MiB/s [2024-12-06T13:57:50.922Z] [2024-12-06 13:57:50.832866] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:51.518 [2024-12-06 13:57:50.832951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bc070 with addr=10.0.0.3, port=4420 00:18:51.518 [2024-12-06 13:57:50.832967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bc070 is same with the state(6) to be set 00:18:51.518 [2024-12-06 13:57:50.832991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19bc070 (9): Bad file descriptor 00:18:51.518 [2024-12-06 13:57:50.833020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:18:51.518 [2024-12-06 13:57:50.833031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:18:51.518 [2024-12-06 13:57:50.833042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:18:51.518 [2024-12-06 13:57:50.833052] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:18:51.518 [2024-12-06 13:57:50.833064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:18:53.393 1456.67 IOPS, 5.69 MiB/s [2024-12-06T13:57:53.056Z] 1248.57 IOPS, 4.88 MiB/s [2024-12-06T13:57:53.056Z] [2024-12-06 13:57:52.833132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:18:53.652 [2024-12-06 13:57:52.833187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:18:53.653 [2024-12-06 13:57:52.833213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:18:53.653 [2024-12-06 13:57:52.833223] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state
00:18:53.653 [2024-12-06 13:57:52.833234] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:18:54.590 1092.50 IOPS, 4.27 MiB/s
00:18:54.590 Latency(us)
00:18:54.590 [2024-12-06T13:57:53.994Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:54.590 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:54.590 Verification LBA range: start 0x0 length 0x4000
00:18:54.590 NVMe0n1 : 8.10 1078.55 4.21 15.80 0.00 116763.73 3381.06 7015926.69
00:18:54.590 [2024-12-06T13:57:53.994Z] ===================================================================================================================
00:18:54.590 [2024-12-06T13:57:53.994Z] Total : 1078.55 4.21 15.80 0.00 116763.73 3381.06 7015926.69
00:18:54.590 {
00:18:54.590 "results": [
00:18:54.590 {
00:18:54.590 "job": "NVMe0n1",
00:18:54.590 "core_mask": "0x4",
00:18:54.590 "workload": "verify",
00:18:54.590 "status": "finished",
00:18:54.590 "verify_range": {
00:18:54.590 "start": 0,
00:18:54.590 "length": 16384
00:18:54.590 },
00:18:54.590 "queue_depth": 128,
00:18:54.590 "io_size": 4096,
00:18:54.590 "runtime": 8.103489,
00:18:54.590 "iops": 1078.5477712131158,
00:18:54.590 "mibps": 4.2130772313012335,
00:18:54.590 "io_failed": 128,
00:18:54.590 "io_timeout": 0,
00:18:54.590 "avg_latency_us": 116763.72908188788,
00:18:54.590 "min_latency_us": 3381.061818181818,
00:18:54.590 "max_latency_us": 7015926.69090909
00:18:54.590 }
00:18:54.590 ],
00:18:54.590 "core_count": 1
00:18:54.590 }
00:18:55.157 13:57:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
00:18:55.157 13:57:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:18:55.157 13:57:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:18:55.417 13:57:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:18:55.417 13:57:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
00:18:55.417 13:57:54
nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:18:55.417 13:57:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:18:55.688 13:57:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:18:55.688 13:57:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 81945 00:18:55.688 13:57:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 81930 00:18:55.688 13:57:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 81930 ']' 00:18:55.688 13:57:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 81930 00:18:55.688 13:57:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:18:55.688 13:57:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:55.688 13:57:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81930 00:18:55.688 13:57:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:55.688 13:57:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:55.688 killing process with pid 81930 00:18:55.688 13:57:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81930' 00:18:55.688 Received shutdown signal, test time was about 9.288375 seconds 00:18:55.688 00:18:55.688 Latency(us) 00:18:55.688 [2024-12-06T13:57:55.092Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.688 [2024-12-06T13:57:55.092Z] =================================================================================================================== 00:18:55.688 [2024-12-06T13:57:55.092Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:55.688 13:57:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 81930 00:18:55.688 13:57:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 81930 00:18:55.965 13:57:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:56.223 [2024-12-06 13:57:55.411968] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:56.223 13:57:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82066 00:18:56.223 13:57:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:18:56.223 13:57:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82066 /var/tmp/bdevperf.sock 00:18:56.223 13:57:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82066 ']' 00:18:56.223 13:57:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:56.223 13:57:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:56.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
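For reference, the bdevperf launch traced above reduces to the sketch below; the binary path, flags and RPC socket path are copied from the trace, while the background job and the socket wait loop are a simplified stand-in for the harness's waitforlisten helper:

# Simplified sketch of the launch sequence (not the harness code itself).
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
sock=/var/tmp/bdevperf.sock

"$bdevperf" -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 -f &
bdevperf_pid=$!

# Wait until the application has created its JSON-RPC listen socket before
# sending it any configuration RPCs.
until [ -S "$sock" ]; do
    sleep 0.1
done
echo "bdevperf (pid $bdevperf_pid) is listening on $sock"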
00:18:56.223 13:57:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:56.223 13:57:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:56.223 13:57:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:56.223 [2024-12-06 13:57:55.498818] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:18:56.223 [2024-12-06 13:57:55.498985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82066 ] 00:18:56.489 [2024-12-06 13:57:55.651813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.489 [2024-12-06 13:57:55.717974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:56.489 [2024-12-06 13:57:55.776753] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:57.058 13:57:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:57.058 13:57:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:18:57.058 13:57:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:57.317 13:57:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:18:57.884 NVMe0n1 00:18:57.884 13:57:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82091 00:18:57.884 13:57:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:57.885 13:57:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:18:57.885 Running I/O for 10 seconds... 
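The attach above sets --reconnect-delay-sec 1, --fast-io-fail-timeout-sec 2 and --ctrlr-loss-timeout-sec 5; as the option names suggest, reconnect attempts are spaced one second apart, I/O is failed fast two seconds after the path drops, and the controller is given up on after five seconds without a successful reconnect. An optional sanity check before perform_tests is started is to reuse the same bdev_get_bdevs RPC and jq filter the test uses elsewhere:

# Optional check (same RPC and filter as the test's get_bdev helper): a
# successful attach should have produced a bdev named NVMe0n1.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_get_bdevs | jq -r '.[].name'    # expected output: NVMe0n1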
00:18:58.822 13:57:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:18:59.085 7266.00 IOPS, 28.38 MiB/s [2024-12-06T13:57:58.489Z] [2024-12-06 13:57:58.250440] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set
[the nvmf_tcp_qpair_set_recv_state error above repeats for tqpair=0x8247c0 with successive timestamps from 13:57:58.250506 onward]
[2024-12-06 13:57:58.250867] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to
be set 00:18:59.086 [2024-12-06 13:57:58.250891] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.250905] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.250913] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.250921] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.250928] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.250936] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.250943] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.250950] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.250957] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.250965] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.250972] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.250979] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.250987] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.250994] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251001] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251009] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251017] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251024] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251031] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251039] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251047] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251054] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251061] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251068] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251076] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251083] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251091] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251098] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251106] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251113] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251120] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251127] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251134] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251141] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251148] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251155] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251163] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251170] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251191] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251199] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251213] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251220] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251227] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251234] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251241] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251248] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251255] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251262] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251269] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251291] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251307] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251334] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251349] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251358] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251366] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251374] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251382] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251389] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251397] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251405] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251413] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251420] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251428] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251436] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251444] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 
00:18:59.086 [2024-12-06 13:57:58.251452] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251467] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251476] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251484] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251492] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251500] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.086 [2024-12-06 13:57:58.251508] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.087 [2024-12-06 13:57:58.251516] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.087 [2024-12-06 13:57:58.251523] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8247c0 is same with the state(6) to be set 00:18:59.087 [2024-12-06 13:57:58.251582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:63440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.087 [2024-12-06 13:57:58.251612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.087 [2024-12-06 13:57:58.251634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:63448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.087 [2024-12-06 13:57:58.251645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.087 [2024-12-06 13:57:58.251657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:63456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.087 [2024-12-06 13:57:58.251667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.087 [2024-12-06 13:57:58.251678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:63464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.087 [2024-12-06 13:57:58.251687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.087 [2024-12-06 13:57:58.251699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:63472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.087 [2024-12-06 13:57:58.251708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.087 [2024-12-06 13:57:58.251719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:63480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.087 [2024-12-06 13:57:58.251728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:59.087 [2024-12-06 13:57:58.251741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:63488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.087 [2024-12-06 13:57:58.251764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.087 [2024-12-06 13:57:58.251775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:63496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.087 [2024-12-06 13:57:58.251784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.087 [2024-12-06 13:57:58.251794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:63504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.087 [2024-12-06 13:57:58.251803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.087 [2024-12-06 13:57:58.251814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:63512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.087 [2024-12-06 13:57:58.251823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.087 [2024-12-06 13:57:58.251848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:63520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.087 [2024-12-06 13:57:58.251872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.087 [2024-12-06 13:57:58.251882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:63528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.087 [2024-12-06 13:57:58.251891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.087 [2024-12-06 13:57:58.251901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:63536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.087 [2024-12-06 13:57:58.251909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.087 [2024-12-06 13:57:58.251919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:63544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.087 [2024-12-06 13:57:58.251928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.087 [2024-12-06 13:57:58.251938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:63552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.087 [2024-12-06 13:57:58.251946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.087 [2024-12-06 13:57:58.251956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:63560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.087 [2024-12-06 13:57:58.251965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.087 
[2024-12-06 13:57:58.251975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:63568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.087 [2024-12-06 13:57:58.251993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.087 [2024-12-06 13:57:58.252004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:63576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.087 [2024-12-06 13:57:58.252013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.087 [2024-12-06 13:57:58.252024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:63584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.087 [2024-12-06 13:57:58.252032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.087 [2024-12-06 13:57:58.252042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:63592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.087 [2024-12-06 13:57:58.252051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.087 [2024-12-06 13:57:58.252061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:63600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.087 [2024-12-06 13:57:58.252070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.087 [2024-12-06 13:57:58.252080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:63608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.087 [2024-12-06 13:57:58.252089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.087 [2024-12-06 13:57:58.252099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:63616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.087 [2024-12-06 13:57:58.252108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.087 [2024-12-06 13:57:58.252118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:63624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.087 [2024-12-06 13:57:58.252126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.087 [2024-12-06 13:57:58.252136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:63632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.087 [2024-12-06 13:57:58.252145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.087 [2024-12-06 13:57:58.252155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:63640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.087 [2024-12-06 13:57:58.252176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.087 [2024-12-06 13:57:58.252204] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.087 [2024-12-06 13:57:58.252213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.087 [2024-12-06 13:57:58.252223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.087 [2024-12-06 13:57:58.252232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.087 [2024-12-06 13:57:58.252243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:63664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.087 [2024-12-06 13:57:58.252252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.087 [2024-12-06 13:57:58.252278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:63672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.087 [2024-12-06 13:57:58.252287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.087 [2024-12-06 13:57:58.252298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.087 [2024-12-06 13:57:58.252307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.087 [2024-12-06 13:57:58.252318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:63688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.087 [2024-12-06 13:57:58.252327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.087 [2024-12-06 13:57:58.252338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:63696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.087 [2024-12-06 13:57:58.252355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.087 [2024-12-06 13:57:58.252366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:63704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.087 [2024-12-06 13:57:58.252375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.087 [2024-12-06 13:57:58.252386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.087 [2024-12-06 13:57:58.252395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.087 [2024-12-06 13:57:58.252406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.087 [2024-12-06 13:57:58.252415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.087 [2024-12-06 13:57:58.252426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:119 nsid:1 lba:63728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.087 [2024-12-06 13:57:58.252435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.087 [2024-12-06 13:57:58.252446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:63736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.087 [2024-12-06 13:57:58.252454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.087 [2024-12-06 13:57:58.252465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:63744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.088 [2024-12-06 13:57:58.252474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.088 [2024-12-06 13:57:58.252484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:63752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.088 [2024-12-06 13:57:58.252493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.088 [2024-12-06 13:57:58.252504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:63760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.088 [2024-12-06 13:57:58.252513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.088 [2024-12-06 13:57:58.252554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:63768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.088 [2024-12-06 13:57:58.252563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.088 [2024-12-06 13:57:58.252573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:63776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.088 [2024-12-06 13:57:58.252583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.088 [2024-12-06 13:57:58.252594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:63784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.088 [2024-12-06 13:57:58.252602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.088 [2024-12-06 13:57:58.252613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:63792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.088 [2024-12-06 13:57:58.252622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.088 [2024-12-06 13:57:58.252633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:63800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.088 [2024-12-06 13:57:58.252642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.088 [2024-12-06 13:57:58.252653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:63808 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.088 [2024-12-06 13:57:58.252663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.088 [2024-12-06 13:57:58.252674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:63816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.088 [2024-12-06 13:57:58.252683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.088 [2024-12-06 13:57:58.252710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:63824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.088 [2024-12-06 13:57:58.252724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.088 [2024-12-06 13:57:58.252736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:63832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.088 [2024-12-06 13:57:58.252745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.088 [2024-12-06 13:57:58.252756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:63840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.088 [2024-12-06 13:57:58.252764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.088 [2024-12-06 13:57:58.252775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:63848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.088 [2024-12-06 13:57:58.252784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.088 [2024-12-06 13:57:58.252795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:63856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.088 [2024-12-06 13:57:58.252804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.088 [2024-12-06 13:57:58.252815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:63864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.088 [2024-12-06 13:57:58.252823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.088 [2024-12-06 13:57:58.252834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:63872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.088 [2024-12-06 13:57:58.252842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.088 [2024-12-06 13:57:58.252853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:63880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.088 [2024-12-06 13:57:58.252862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.088 [2024-12-06 13:57:58.252872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:63888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:59.088 [2024-12-06 13:57:58.252889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.088 [2024-12-06 13:57:58.252914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:63896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.088 [2024-12-06 13:57:58.252923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.088 [2024-12-06 13:57:58.252933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:63904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.088 [2024-12-06 13:57:58.252942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.088 [2024-12-06 13:57:58.252952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:63912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.088 [2024-12-06 13:57:58.252961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.088 [2024-12-06 13:57:58.252971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:63920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.088 [2024-12-06 13:57:58.252980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.088 [2024-12-06 13:57:58.252990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:63928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.088 [2024-12-06 13:57:58.252999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.088 [2024-12-06 13:57:58.253009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:63936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.088 [2024-12-06 13:57:58.253017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.088 [2024-12-06 13:57:58.253028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:63944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.088 [2024-12-06 13:57:58.253036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.088 [2024-12-06 13:57:58.253047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:63952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.088 [2024-12-06 13:57:58.253061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.088 [2024-12-06 13:57:58.253072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:63960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.088 [2024-12-06 13:57:58.253081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.088 [2024-12-06 13:57:58.253092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:63968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.088 [2024-12-06 13:57:58.253100] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.088 [2024-12-06 13:57:58.253111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:63976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.088 [2024-12-06 13:57:58.253119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.088 [2024-12-06 13:57:58.253130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:63984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.088 [2024-12-06 13:57:58.253138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.088 [2024-12-06 13:57:58.253148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:63992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.088 [2024-12-06 13:57:58.253157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.088 [2024-12-06 13:57:58.253168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:64000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.088 [2024-12-06 13:57:58.253177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.088 [2024-12-06 13:57:58.253196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:64008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.088 [2024-12-06 13:57:58.253206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.088 [2024-12-06 13:57:58.253231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:64016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.088 [2024-12-06 13:57:58.253255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.088 [2024-12-06 13:57:58.253266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:64024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.088 [2024-12-06 13:57:58.253274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.088 [2024-12-06 13:57:58.253285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.088 [2024-12-06 13:57:58.253293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.088 [2024-12-06 13:57:58.253304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:64040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.088 [2024-12-06 13:57:58.253313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.088 [2024-12-06 13:57:58.253323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:64048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.088 [2024-12-06 13:57:58.253332] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.088 [2024-12-06 13:57:58.253342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.088 [2024-12-06 13:57:58.253351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.088 [2024-12-06 13:57:58.253361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:64064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.089 [2024-12-06 13:57:58.253370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.089 [2024-12-06 13:57:58.253380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:64072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.089 [2024-12-06 13:57:58.253389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.089 [2024-12-06 13:57:58.253399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:64080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.089 [2024-12-06 13:57:58.253414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.089 [2024-12-06 13:57:58.253424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:64088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.089 [2024-12-06 13:57:58.253433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.089 [2024-12-06 13:57:58.253443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:64096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.089 [2024-12-06 13:57:58.253452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.089 [2024-12-06 13:57:58.253463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:64104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.089 [2024-12-06 13:57:58.253471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.089 [2024-12-06 13:57:58.253481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:64112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.089 [2024-12-06 13:57:58.253490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.089 [2024-12-06 13:57:58.253500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:64120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.089 [2024-12-06 13:57:58.253508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.089 [2024-12-06 13:57:58.253544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:64128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.089 [2024-12-06 13:57:58.253555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.089 [2024-12-06 13:57:58.253566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.089 [2024-12-06 13:57:58.253576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.089 [2024-12-06 13:57:58.253587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:64144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.089 [2024-12-06 13:57:58.253597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.089 [2024-12-06 13:57:58.253608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:64152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.089 [2024-12-06 13:57:58.253617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.089 [2024-12-06 13:57:58.253628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:64160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.089 [2024-12-06 13:57:58.253637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.089 [2024-12-06 13:57:58.253649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:64168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.089 [2024-12-06 13:57:58.253658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.089 [2024-12-06 13:57:58.253669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:64176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.089 [2024-12-06 13:57:58.253693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.089 [2024-12-06 13:57:58.253703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.089 [2024-12-06 13:57:58.253712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.089 [2024-12-06 13:57:58.253723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.089 [2024-12-06 13:57:58.253732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.089 [2024-12-06 13:57:58.253742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:64200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.089 [2024-12-06 13:57:58.253751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.089 [2024-12-06 13:57:58.253761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.089 [2024-12-06 13:57:58.253775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.089 [2024-12-06 13:57:58.253786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:64216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.089 [2024-12-06 13:57:58.253795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.089 [2024-12-06 13:57:58.253806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:64224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.089 [2024-12-06 13:57:58.253815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.089 [2024-12-06 13:57:58.253826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:64232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.089 [2024-12-06 13:57:58.253835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.089 [2024-12-06 13:57:58.253845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.089 [2024-12-06 13:57:58.253877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.089 [2024-12-06 13:57:58.253898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:64248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.089 [2024-12-06 13:57:58.253922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.089 [2024-12-06 13:57:58.253934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:64256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.089 [2024-12-06 13:57:58.253942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.089 [2024-12-06 13:57:58.253952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:64264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.089 [2024-12-06 13:57:58.253960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.089 [2024-12-06 13:57:58.253970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:64272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.089 [2024-12-06 13:57:58.253979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.089 [2024-12-06 13:57:58.253989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:64280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.089 [2024-12-06 13:57:58.253998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.089 [2024-12-06 13:57:58.254008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:64288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.089 [2024-12-06 13:57:58.254017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:59.089 [2024-12-06 13:57:58.254027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:64296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.089 [2024-12-06 13:57:58.254036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.089 [2024-12-06 13:57:58.254046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:64304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.089 [2024-12-06 13:57:58.254054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.089 [2024-12-06 13:57:58.254065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:64312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.089 [2024-12-06 13:57:58.254073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.089 [2024-12-06 13:57:58.254083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.089 [2024-12-06 13:57:58.254091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.089 [2024-12-06 13:57:58.254102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:64336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.089 [2024-12-06 13:57:58.254110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.089 [2024-12-06 13:57:58.254120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:64344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.089 [2024-12-06 13:57:58.254133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.089 [2024-12-06 13:57:58.254143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:64352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.089 [2024-12-06 13:57:58.254152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.089 [2024-12-06 13:57:58.254161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:64360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.089 [2024-12-06 13:57:58.254170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.089 [2024-12-06 13:57:58.254180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:64368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.089 [2024-12-06 13:57:58.254188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.089 [2024-12-06 13:57:58.254215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:64376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.089 [2024-12-06 13:57:58.254231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.089 [2024-12-06 13:57:58.254257] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.089 [2024-12-06 13:57:58.254266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.089 [2024-12-06 13:57:58.254276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:64384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.090 [2024-12-06 13:57:58.254285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.090 [2024-12-06 13:57:58.254295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.090 [2024-12-06 13:57:58.254304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.090 [2024-12-06 13:57:58.254315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.090 [2024-12-06 13:57:58.254323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.090 [2024-12-06 13:57:58.254334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.090 [2024-12-06 13:57:58.254343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.090 [2024-12-06 13:57:58.254353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:64416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.090 [2024-12-06 13:57:58.254361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.090 [2024-12-06 13:57:58.254372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:64424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.090 [2024-12-06 13:57:58.254380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.090 [2024-12-06 13:57:58.254390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.090 [2024-12-06 13:57:58.254399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.090 [2024-12-06 13:57:58.254409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:64440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.090 [2024-12-06 13:57:58.254418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.090 [2024-12-06 13:57:58.254428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:64448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:59.090 [2024-12-06 13:57:58.254436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.090 [2024-12-06 13:57:58.254446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x9e72e0 is same with the state(6) to be set 00:18:59.090 [2024-12-06 13:57:58.254457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:59.090 [2024-12-06 13:57:58.254465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:59.090 [2024-12-06 13:57:58.254478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64456 len:8 PRP1 0x0 PRP2 0x0 00:18:59.090 [2024-12-06 13:57:58.254486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.090 [2024-12-06 13:57:58.254846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:18:59.090 [2024-12-06 13:57:58.254968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x979070 (9): Bad file descriptor 00:18:59.090 [2024-12-06 13:57:58.255080] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:59.090 [2024-12-06 13:57:58.255100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x979070 with addr=10.0.0.3, port=4420 00:18:59.090 [2024-12-06 13:57:58.255110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x979070 is same with the state(6) to be set 00:18:59.090 [2024-12-06 13:57:58.255142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x979070 (9): Bad file descriptor 00:18:59.090 [2024-12-06 13:57:58.255163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:18:59.090 [2024-12-06 13:57:58.255188] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:18:59.090 [2024-12-06 13:57:58.255199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:18:59.090 [2024-12-06 13:57:58.255210] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:18:59.090 [2024-12-06 13:57:58.255221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:18:59.090 13:57:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:19:00.029 3965.00 IOPS, 15.49 MiB/s [2024-12-06T13:57:59.433Z] [2024-12-06 13:57:59.255363] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:00.029 [2024-12-06 13:57:59.255406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x979070 with addr=10.0.0.3, port=4420 00:19:00.029 [2024-12-06 13:57:59.255419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x979070 is same with the state(6) to be set 00:19:00.029 [2024-12-06 13:57:59.255438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x979070 (9): Bad file descriptor 00:19:00.029 [2024-12-06 13:57:59.255454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:19:00.029 [2024-12-06 13:57:59.255462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:19:00.029 [2024-12-06 13:57:59.255472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:19:00.029 [2024-12-06 13:57:59.255482] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:19:00.029 [2024-12-06 13:57:59.255491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:00.029 13:57:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:00.288 [2024-12-06 13:57:59.490090] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:00.288 13:57:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 82091 00:19:01.114 2643.33 IOPS, 10.33 MiB/s [2024-12-06T13:58:00.518Z] [2024-12-06 13:58:00.271122] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:19:02.990 1982.50 IOPS, 7.74 MiB/s [2024-12-06T13:58:03.332Z] 3361.40 IOPS, 13.13 MiB/s [2024-12-06T13:58:04.269Z] 4470.50 IOPS, 17.46 MiB/s [2024-12-06T13:58:05.204Z] 5287.86 IOPS, 20.66 MiB/s [2024-12-06T13:58:06.578Z] 5900.88 IOPS, 23.05 MiB/s [2024-12-06T13:58:07.514Z] 6384.78 IOPS, 24.94 MiB/s [2024-12-06T13:58:07.514Z] 6772.70 IOPS, 26.46 MiB/s 00:19:08.111 Latency(us) 00:19:08.111 [2024-12-06T13:58:07.515Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:08.111 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:08.111 Verification LBA range: start 0x0 length 0x4000 00:19:08.111 NVMe0n1 : 10.01 6778.64 26.48 0.00 0.00 18858.24 1213.91 3035150.89 00:19:08.111 [2024-12-06T13:58:07.515Z] =================================================================================================================== 00:19:08.111 [2024-12-06T13:58:07.515Z] Total : 6778.64 26.48 0.00 0.00 18858.24 1213.91 3035150.89 00:19:08.111 { 00:19:08.111 "results": [ 00:19:08.111 { 00:19:08.111 "job": "NVMe0n1", 00:19:08.111 "core_mask": "0x4", 00:19:08.111 "workload": "verify", 00:19:08.111 "status": "finished", 00:19:08.111 "verify_range": { 00:19:08.111 "start": 0, 00:19:08.111 "length": 16384 00:19:08.111 }, 00:19:08.111 "queue_depth": 128, 00:19:08.111 "io_size": 4096, 00:19:08.111 "runtime": 10.01012, 00:19:08.111 "iops": 6778.6400163035005, 00:19:08.111 "mibps": 26.47906256368555, 00:19:08.111 "io_failed": 0, 00:19:08.111 "io_timeout": 0, 00:19:08.111 "avg_latency_us": 18858.240053161488, 00:19:08.111 "min_latency_us": 1213.9054545454546, 00:19:08.111 "max_latency_us": 3035150.8945454545 00:19:08.111 } 00:19:08.111 ], 00:19:08.111 "core_count": 1 00:19:08.111 } 00:19:08.111 13:58:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82196 00:19:08.111 13:58:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:08.111 13:58:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:19:08.111 Running I/O for 10 seconds... 
00:19:09.049 13:58:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:09.049 9304.00 IOPS, 36.34 MiB/s [2024-12-06T13:58:08.453Z] [2024-12-06 13:58:08.438896] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8257a0 is same with the state(6) to be set 00:19:09.049 [2024-12-06 13:58:08.439370] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8257a0 is same with the state(6) to be set 00:19:09.049 [2024-12-06 13:58:08.439428] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8257a0 is same with the state(6) to be set 00:19:09.049 [2024-12-06 13:58:08.439523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:85344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.049 [2024-12-06 13:58:08.439553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.049 [2024-12-06 13:58:08.439574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:85352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.049 [2024-12-06 13:58:08.439585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.049 [2024-12-06 13:58:08.439596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:85360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.049 [2024-12-06 13:58:08.439606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.049 [2024-12-06 13:58:08.439633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:85368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.049 [2024-12-06 13:58:08.439642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.049 [2024-12-06 13:58:08.439652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:85696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.049 [2024-12-06 13:58:08.439661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.049 [2024-12-06 13:58:08.439671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:85704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.049 [2024-12-06 13:58:08.439680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.049 [2024-12-06 13:58:08.439690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:85712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.049 [2024-12-06 13:58:08.439699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.049 [2024-12-06 13:58:08.439709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.049 [2024-12-06 13:58:08.439717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:19:09.049 [2024-12-06 13:58:08.439728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:85728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.049 [2024-12-06 13:58:08.439737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.049 [2024-12-06 13:58:08.439747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:85736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.049 [2024-12-06 13:58:08.439756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.049 [2024-12-06 13:58:08.439773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.049 [2024-12-06 13:58:08.439788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.050 [2024-12-06 13:58:08.439799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:85752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.050 [2024-12-06 13:58:08.439808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.050 [2024-12-06 13:58:08.439825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.050 [2024-12-06 13:58:08.439834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.050 [2024-12-06 13:58:08.439859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:85768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.050 [2024-12-06 13:58:08.439883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.050 [2024-12-06 13:58:08.439893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:85776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.050 [2024-12-06 13:58:08.439900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.050 [2024-12-06 13:58:08.439910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:85784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.050 [2024-12-06 13:58:08.439918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.050 [2024-12-06 13:58:08.439928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:85792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.050 [2024-12-06 13:58:08.439938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.050 [2024-12-06 13:58:08.439948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:85800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.050 [2024-12-06 13:58:08.439957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.050 [2024-12-06 13:58:08.439966] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:85808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.050 [2024-12-06 13:58:08.439974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.050 [2024-12-06 13:58:08.439984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:85816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.050 [2024-12-06 13:58:08.439992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.050 [2024-12-06 13:58:08.440002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:85376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.050 [2024-12-06 13:58:08.440010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.050 [2024-12-06 13:58:08.440020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:85384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.050 [2024-12-06 13:58:08.440028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.050 [2024-12-06 13:58:08.440038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.050 [2024-12-06 13:58:08.440046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.050 [2024-12-06 13:58:08.440055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:85400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.050 [2024-12-06 13:58:08.440063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.050 [2024-12-06 13:58:08.440072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:85408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.050 [2024-12-06 13:58:08.440081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.050 [2024-12-06 13:58:08.440091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:85416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.050 [2024-12-06 13:58:08.440099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.050 [2024-12-06 13:58:08.440124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:85424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.050 [2024-12-06 13:58:08.440133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.050 [2024-12-06 13:58:08.440155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:85432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.050 [2024-12-06 13:58:08.440166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.050 [2024-12-06 13:58:08.440177] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:107 nsid:1 lba:85440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.050 [2024-12-06 13:58:08.440185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.050 [2024-12-06 13:58:08.440195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:85448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.050 [2024-12-06 13:58:08.440204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.050 [2024-12-06 13:58:08.440214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:85456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.050 [2024-12-06 13:58:08.440223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.050 [2024-12-06 13:58:08.440233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:85464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.050 [2024-12-06 13:58:08.440241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.050 [2024-12-06 13:58:08.440251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:85472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.050 [2024-12-06 13:58:08.440262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.050 [2024-12-06 13:58:08.440272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:85480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.050 [2024-12-06 13:58:08.440281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.050 [2024-12-06 13:58:08.440291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.050 [2024-12-06 13:58:08.440299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.050 [2024-12-06 13:58:08.440309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:85496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.050 [2024-12-06 13:58:08.440317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.050 [2024-12-06 13:58:08.440327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:85824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.050 [2024-12-06 13:58:08.440335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.050 [2024-12-06 13:58:08.440344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:85832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.050 [2024-12-06 13:58:08.440352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.050 [2024-12-06 13:58:08.440362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 
lba:85840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.050 [2024-12-06 13:58:08.440370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.050 [2024-12-06 13:58:08.440380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:85848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.050 [2024-12-06 13:58:08.440388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.050 [2024-12-06 13:58:08.440398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:85856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.050 [2024-12-06 13:58:08.440405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.050 [2024-12-06 13:58:08.440415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:85864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.050 [2024-12-06 13:58:08.440423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.050 [2024-12-06 13:58:08.440433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:85872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.050 [2024-12-06 13:58:08.440441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.050 [2024-12-06 13:58:08.440450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:85880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.050 [2024-12-06 13:58:08.440458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.050 [2024-12-06 13:58:08.440468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:85504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.050 [2024-12-06 13:58:08.440476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.050 [2024-12-06 13:58:08.440502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:85512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.050 [2024-12-06 13:58:08.440526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.050 [2024-12-06 13:58:08.440537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:85520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.050 [2024-12-06 13:58:08.440546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.050 [2024-12-06 13:58:08.440557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:85528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.050 [2024-12-06 13:58:08.440566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.050 [2024-12-06 13:58:08.440576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:85536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:09.050 [2024-12-06 13:58:08.440585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.050 [2024-12-06 13:58:08.440596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:85544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.050 [2024-12-06 13:58:08.440620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.051 [2024-12-06 13:58:08.440631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:85552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.051 [2024-12-06 13:58:08.440640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.051 [2024-12-06 13:58:08.440652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:85560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.051 [2024-12-06 13:58:08.440661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.051 [2024-12-06 13:58:08.440672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:85888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.051 [2024-12-06 13:58:08.440681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.051 [2024-12-06 13:58:08.440692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.051 [2024-12-06 13:58:08.440701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.051 [2024-12-06 13:58:08.440712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:85904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.051 [2024-12-06 13:58:08.440721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.051 [2024-12-06 13:58:08.440731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:85912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.051 [2024-12-06 13:58:08.440740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.051 [2024-12-06 13:58:08.440751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:85920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.051 [2024-12-06 13:58:08.440760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.051 [2024-12-06 13:58:08.440771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:85928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.051 [2024-12-06 13:58:08.440780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.051 [2024-12-06 13:58:08.440791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:85936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.051 [2024-12-06 13:58:08.440799] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.051 [2024-12-06 13:58:08.440809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:85944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.051 [2024-12-06 13:58:08.440818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.051 [2024-12-06 13:58:08.440829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.051 [2024-12-06 13:58:08.440840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.051 [2024-12-06 13:58:08.440850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:85960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.051 [2024-12-06 13:58:08.440859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.051 [2024-12-06 13:58:08.440870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:85968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.051 [2024-12-06 13:58:08.440879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.051 [2024-12-06 13:58:08.440890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:85976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.051 [2024-12-06 13:58:08.440899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.051 [2024-12-06 13:58:08.440910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.051 [2024-12-06 13:58:08.440920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.051 [2024-12-06 13:58:08.440930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:85992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.051 [2024-12-06 13:58:08.440939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.051 [2024-12-06 13:58:08.440950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:86000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.051 [2024-12-06 13:58:08.440959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.051 [2024-12-06 13:58:08.440969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:86008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.051 [2024-12-06 13:58:08.440978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.051 [2024-12-06 13:58:08.440989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:86016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.051 [2024-12-06 13:58:08.440997] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.051 [2024-12-06 13:58:08.441008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:86024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.051 [2024-12-06 13:58:08.441017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.051 [2024-12-06 13:58:08.441027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:86032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.051 [2024-12-06 13:58:08.441036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.051 [2024-12-06 13:58:08.441047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:86040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.051 [2024-12-06 13:58:08.441055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.051 [2024-12-06 13:58:08.441066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:86048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.051 [2024-12-06 13:58:08.441074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.051 [2024-12-06 13:58:08.441085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.051 [2024-12-06 13:58:08.441094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.051 [2024-12-06 13:58:08.441110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:86064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.051 [2024-12-06 13:58:08.441119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.051 [2024-12-06 13:58:08.441130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:86072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.051 [2024-12-06 13:58:08.441138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.051 [2024-12-06 13:58:08.441161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:85568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.051 [2024-12-06 13:58:08.441171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.051 [2024-12-06 13:58:08.441182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:85576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.051 [2024-12-06 13:58:08.441191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.051 [2024-12-06 13:58:08.441202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:85584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.051 [2024-12-06 13:58:08.441212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.051 [2024-12-06 13:58:08.441223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:85592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.051 [2024-12-06 13:58:08.441232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.051 [2024-12-06 13:58:08.441243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:85600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.051 [2024-12-06 13:58:08.441251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.051 [2024-12-06 13:58:08.441262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:85608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.051 [2024-12-06 13:58:08.441271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.051 [2024-12-06 13:58:08.441282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:85616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.051 [2024-12-06 13:58:08.441290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.051 [2024-12-06 13:58:08.441301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:85624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.051 [2024-12-06 13:58:08.441310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.051 [2024-12-06 13:58:08.441320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:86080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.051 [2024-12-06 13:58:08.441329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.051 [2024-12-06 13:58:08.441340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:86088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.051 [2024-12-06 13:58:08.441348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.051 [2024-12-06 13:58:08.441359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:86096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.051 [2024-12-06 13:58:08.441370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.051 [2024-12-06 13:58:08.441381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:86104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.051 [2024-12-06 13:58:08.441389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.051 [2024-12-06 13:58:08.441399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:86112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.051 [2024-12-06 13:58:08.441408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:09.051 [2024-12-06 13:58:08.441419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:86120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.052 [2024-12-06 13:58:08.441428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.052 [2024-12-06 13:58:08.441438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:86128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.052 [2024-12-06 13:58:08.441447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.052 [2024-12-06 13:58:08.441458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:86136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.052 [2024-12-06 13:58:08.441466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.052 [2024-12-06 13:58:08.441477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:86144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.052 [2024-12-06 13:58:08.441486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.052 [2024-12-06 13:58:08.441496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:86152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.052 [2024-12-06 13:58:08.441505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.052 [2024-12-06 13:58:08.441517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:86160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.052 [2024-12-06 13:58:08.441526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.052 [2024-12-06 13:58:08.441537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:86168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.052 [2024-12-06 13:58:08.441545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.052 [2024-12-06 13:58:08.441556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:86176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.052 [2024-12-06 13:58:08.441565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.052 [2024-12-06 13:58:08.441575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:86184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.052 [2024-12-06 13:58:08.441584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.052 [2024-12-06 13:58:08.441595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:86192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.052 [2024-12-06 13:58:08.441604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.052 [2024-12-06 13:58:08.441614] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.052 [2024-12-06 13:58:08.441623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.052 [2024-12-06 13:58:08.441634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:85632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.052 [2024-12-06 13:58:08.441642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.052 [2024-12-06 13:58:08.441653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:85640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.052 [2024-12-06 13:58:08.441662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.052 [2024-12-06 13:58:08.441673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:85648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.052 [2024-12-06 13:58:08.441682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.052 [2024-12-06 13:58:08.441692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:85656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.052 [2024-12-06 13:58:08.441701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.052 [2024-12-06 13:58:08.441711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:85664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.052 [2024-12-06 13:58:08.441720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.052 [2024-12-06 13:58:08.441731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:85672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.052 [2024-12-06 13:58:08.441740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.052 [2024-12-06 13:58:08.441751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:85680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.052 [2024-12-06 13:58:08.441760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.052 [2024-12-06 13:58:08.441770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:85688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.052 [2024-12-06 13:58:08.441779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.052 [2024-12-06 13:58:08.441790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:86208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.052 [2024-12-06 13:58:08.441798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.052 [2024-12-06 13:58:08.441808] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:86216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.052 [2024-12-06 13:58:08.441817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.052 [2024-12-06 13:58:08.441828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.052 [2024-12-06 13:58:08.441836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.052 [2024-12-06 13:58:08.441847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:86232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.052 [2024-12-06 13:58:08.441856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.052 [2024-12-06 13:58:08.441867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:86240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.052 [2024-12-06 13:58:08.441875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.052 [2024-12-06 13:58:08.441886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:86248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.052 [2024-12-06 13:58:08.441895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.052 [2024-12-06 13:58:08.441906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:86256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.052 [2024-12-06 13:58:08.441915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.052 [2024-12-06 13:58:08.441926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:86264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.052 [2024-12-06 13:58:08.441935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.052 [2024-12-06 13:58:08.441946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:86272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.052 [2024-12-06 13:58:08.441955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.052 [2024-12-06 13:58:08.441965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:86280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.052 [2024-12-06 13:58:08.441974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.052 [2024-12-06 13:58:08.441985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:86288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.052 [2024-12-06 13:58:08.441994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.052 [2024-12-06 13:58:08.442004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:34 nsid:1 lba:86296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.052 [2024-12-06 13:58:08.442013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.052 [2024-12-06 13:58:08.442023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:86304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.052 [2024-12-06 13:58:08.442032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.052 [2024-12-06 13:58:08.442042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.052 [2024-12-06 13:58:08.442051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.052 [2024-12-06 13:58:08.442062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:86320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.052 [2024-12-06 13:58:08.442071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.052 [2024-12-06 13:58:08.442081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:86328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.052 [2024-12-06 13:58:08.442090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.052 [2024-12-06 13:58:08.442110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:86336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.052 [2024-12-06 13:58:08.442121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.052 [2024-12-06 13:58:08.442132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:86344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.052 [2024-12-06 13:58:08.442140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.052 [2024-12-06 13:58:08.442152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:86352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.052 [2024-12-06 13:58:08.442161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.052 [2024-12-06 13:58:08.442171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa1a0 is same with the state(6) to be set 00:19:09.052 [2024-12-06 13:58:08.442183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:09.052 [2024-12-06 13:58:08.442191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:09.053 [2024-12-06 13:58:08.442199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86360 len:8 PRP1 0x0 PRP2 0x0 00:19:09.053 [2024-12-06 13:58:08.442208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.053 [2024-12-06 13:58:08.442535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:09.053 [2024-12-06 13:58:08.442624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x979070 (9): Bad file descriptor 00:19:09.053 [2024-12-06 13:58:08.442751] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:09.053 [2024-12-06 13:58:08.442775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x979070 with addr=10.0.0.3, port=4420 00:19:09.053 [2024-12-06 13:58:08.442787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x979070 is same with the state(6) to be set 00:19:09.053 [2024-12-06 13:58:08.442819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x979070 (9): Bad file descriptor 00:19:09.053 [2024-12-06 13:58:08.442849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:09.053 [2024-12-06 13:58:08.442863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:09.053 [2024-12-06 13:58:08.442874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:09.053 [2024-12-06 13:58:08.442885] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:19:09.053 [2024-12-06 13:58:08.442896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:09.311 13:58:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:19:10.246 5334.00 IOPS, 20.84 MiB/s [2024-12-06T13:58:09.650Z] [2024-12-06 13:58:09.443002] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:10.246 [2024-12-06 13:58:09.443062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x979070 with addr=10.0.0.3, port=4420 00:19:10.246 [2024-12-06 13:58:09.443077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x979070 is same with the state(6) to be set 00:19:10.246 [2024-12-06 13:58:09.443097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x979070 (9): Bad file descriptor 00:19:10.246 [2024-12-06 13:58:09.443147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:10.246 [2024-12-06 13:58:09.443158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:10.246 [2024-12-06 13:58:09.443168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:10.246 [2024-12-06 13:58:09.443179] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:19:10.246 [2024-12-06 13:58:09.443189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:11.182 3556.00 IOPS, 13.89 MiB/s [2024-12-06T13:58:10.586Z] [2024-12-06 13:58:10.443268] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:11.182 [2024-12-06 13:58:10.443515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x979070 with addr=10.0.0.3, port=4420 00:19:11.182 [2024-12-06 13:58:10.443537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x979070 is same with the state(6) to be set 00:19:11.182 [2024-12-06 13:58:10.443559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x979070 (9): Bad file descriptor 00:19:11.182 [2024-12-06 13:58:10.443576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:11.182 [2024-12-06 13:58:10.443585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:11.182 [2024-12-06 13:58:10.443594] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:11.182 [2024-12-06 13:58:10.443604] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:19:11.182 [2024-12-06 13:58:10.443614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:12.117 2667.00 IOPS, 10.42 MiB/s [2024-12-06T13:58:11.521Z] [2024-12-06 13:58:11.446348] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:12.117 [2024-12-06 13:58:11.446562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x979070 with addr=10.0.0.3, port=4420 00:19:12.117 [2024-12-06 13:58:11.446584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x979070 is same with the state(6) to be set 00:19:12.118 [2024-12-06 13:58:11.446820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x979070 (9): Bad file descriptor 00:19:12.118 [2024-12-06 13:58:11.447082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:12.118 [2024-12-06 13:58:11.447093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:12.118 [2024-12-06 13:58:11.447103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:12.118 [2024-12-06 13:58:11.447124] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:19:12.118 [2024-12-06 13:58:11.447133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:12.118 13:58:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:12.377 [2024-12-06 13:58:11.703201] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:12.377 13:58:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 82196 00:19:13.203 2133.60 IOPS, 8.33 MiB/s [2024-12-06T13:58:12.607Z] [2024-12-06 13:58:12.473588] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 00:19:15.078 3151.17 IOPS, 12.31 MiB/s [2024-12-06T13:58:15.444Z] 4174.00 IOPS, 16.30 MiB/s [2024-12-06T13:58:16.386Z] 5019.38 IOPS, 19.61 MiB/s [2024-12-06T13:58:17.320Z] 5674.89 IOPS, 22.17 MiB/s [2024-12-06T13:58:17.320Z] 6205.70 IOPS, 24.24 MiB/s 00:19:17.916 Latency(us) 00:19:17.916 [2024-12-06T13:58:17.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.916 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:17.916 Verification LBA range: start 0x0 length 0x4000 00:19:17.916 NVMe0n1 : 10.01 6210.21 24.26 4339.51 0.00 12109.19 618.12 3019898.88 00:19:17.916 [2024-12-06T13:58:17.320Z] =================================================================================================================== 00:19:17.916 [2024-12-06T13:58:17.320Z] Total : 6210.21 24.26 4339.51 0.00 12109.19 0.00 3019898.88 00:19:17.916 { 00:19:17.916 "results": [ 00:19:17.916 { 00:19:17.916 "job": "NVMe0n1", 00:19:17.916 "core_mask": "0x4", 00:19:17.916 "workload": "verify", 00:19:17.916 "status": "finished", 00:19:17.916 "verify_range": { 00:19:17.916 "start": 0, 00:19:17.916 "length": 16384 00:19:17.916 }, 00:19:17.916 "queue_depth": 128, 00:19:17.916 "io_size": 4096, 00:19:17.916 "runtime": 10.008043, 00:19:17.916 "iops": 6210.205132012323, 00:19:17.916 "mibps": 24.258613796923136, 00:19:17.916 "io_failed": 43430, 00:19:17.916 "io_timeout": 0, 00:19:17.916 "avg_latency_us": 12109.187495251428, 00:19:17.916 "min_latency_us": 618.1236363636364, 00:19:17.916 "max_latency_us": 3019898.88 00:19:17.916 } 00:19:17.916 ], 00:19:17.916 "core_count": 1 00:19:17.916 } 00:19:18.175 13:58:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82066 00:19:18.175 13:58:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82066 ']' 00:19:18.175 13:58:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82066 00:19:18.175 13:58:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:19:18.175 13:58:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:18.175 13:58:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82066 00:19:18.175 killing process with pid 82066 00:19:18.175 Received shutdown signal, test time was about 10.000000 seconds 00:19:18.175 00:19:18.175 Latency(us) 00:19:18.175 [2024-12-06T13:58:17.579Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.175 [2024-12-06T13:58:17.579Z] =================================================================================================================== 00:19:18.175 [2024-12-06T13:58:17.579Z] Total : 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:18.175 13:58:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:18.175 13:58:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:18.175 13:58:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82066' 00:19:18.175 13:58:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82066 00:19:18.175 13:58:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82066 00:19:18.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:18.175 13:58:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82310 00:19:18.175 13:58:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:19:18.175 13:58:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82310 /var/tmp/bdevperf.sock 00:19:18.175 13:58:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82310 ']' 00:19:18.175 13:58:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:18.175 13:58:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:18.175 13:58:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:18.175 13:58:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:18.175 13:58:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:18.433 [2024-12-06 13:58:17.608492] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:19:18.433 [2024-12-06 13:58:17.608785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82310 ] 00:19:18.433 [2024-12-06 13:58:17.754338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.433 [2024-12-06 13:58:17.794979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:18.691 [2024-12-06 13:58:17.845734] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:18.691 13:58:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:18.691 13:58:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:19:18.691 13:58:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82313 00:19:18.691 13:58:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82310 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:19:18.691 13:58:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:19:18.950 13:58:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:19.208 NVMe0n1 00:19:19.208 13:58:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82360 00:19:19.208 13:58:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:19.208 13:58:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:19:19.466 Running I/O for 10 seconds... 
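For this second bdevperf instance (pid 82310) the harness starts bdevperf with -z so it waits to be driven over /var/tmp/bdevperf.sock instead of running immediately, configures the NVMe bdev options, attaches the controller with the short 5-second ctrlr-loss timeout and 2-second reconnect delay under test, and only then launches the queued randread job through bdevperf.py. A condensed replay of that sequence as it appears in the trace above (same paths, socket and arguments; the RPC shell variable is only shorthand for this sketch):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Configure bdev_nvme before any controller is attached.
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
  # Attach the TCP controller with the loss/reconnect timeouts being exercised.
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # Start the workload in the idle bdevperf process.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests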
00:19:20.406 13:58:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:20.406 17272.00 IOPS, 67.47 MiB/s [2024-12-06T13:58:19.810Z] [2024-12-06 13:58:19.787528] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.406 [2024-12-06 13:58:19.787809] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.406 [2024-12-06 13:58:19.787964] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.406 [2024-12-06 13:58:19.788194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:20.406 [2024-12-06 13:58:19.788234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-12-06 13:58:19.788250] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.406 [2024-12-06 13:58:19.788266] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.406 [2024-12-06 13:58:19.788275] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.406 [2024-12-06 13:58:19.788282] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.406 [2024-12-06 13:58:19.788291] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788298] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788306] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788313] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788321] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788328] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788336] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788343] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788351] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788358] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788365] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788373] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788381] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788388] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788395] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788403] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788410] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788417] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788425] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788432] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788441] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788448] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788455] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788463] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788470] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788478] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788485] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788493] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788501] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788509] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788516] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788524] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788531] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 
00:19:20.407 [2024-12-06 13:58:19.788539] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788546] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788554] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788561] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788568] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788575] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788583] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788590] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788597] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788604] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788612] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788619] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788626] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788633] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788641] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788648] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788656] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788664] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788671] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788681] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833b50 is same with the state(6) to be set 00:19:20.407 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.407 [2024-12-06 13:58:19.788736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:20.407 [2024-12-06 13:58:19.788751] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.407 [2024-12-06 13:58:19.788763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:20.407 [2024-12-06 13:58:19.788772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.407 [2024-12-06 13:58:19.788781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:20.407 [2024-12-06 13:58:19.788790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.407 [2024-12-06 13:58:19.788799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f1070 is same with the state(6) to be set 00:19:20.407 [2024-12-06 13:58:19.788991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:33088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.407 [2024-12-06 13:58:19.789009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.407 [2024-12-06 13:58:19.789028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:105640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.408 [2024-12-06 13:58:19.789038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.408 [2024-12-06 13:58:19.789050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:77480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.408 [2024-12-06 13:58:19.789059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.408 [2024-12-06 13:58:19.789070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:58400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.408 [2024-12-06 13:58:19.789079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.408 [2024-12-06 13:58:19.789089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:111824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.408 [2024-12-06 13:58:19.789131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.408 [2024-12-06 13:58:19.789146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:122800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.408 [2024-12-06 13:58:19.789156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.408 [2024-12-06 13:58:19.789167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.408 [2024-12-06 13:58:19.789176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.408 [2024-12-06 13:58:19.789187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 
nsid:1 lba:36128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.408 [2024-12-06 13:58:19.789196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.408 [2024-12-06 13:58:19.789207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:60592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.408 [2024-12-06 13:58:19.789216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.408 [2024-12-06 13:58:19.789227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:39352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.408 [2024-12-06 13:58:19.789236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.408 [2024-12-06 13:58:19.789247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:84120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.408 [2024-12-06 13:58:19.789256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.408 [2024-12-06 13:58:19.789267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:51408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.408 [2024-12-06 13:58:19.789276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.408 [2024-12-06 13:58:19.789287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:48800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.408 [2024-12-06 13:58:19.789298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.408 [2024-12-06 13:58:19.789309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:127496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.408 [2024-12-06 13:58:19.789318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.408 [2024-12-06 13:58:19.789329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:42176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.408 [2024-12-06 13:58:19.789339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.408 [2024-12-06 13:58:19.789349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:79904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.408 [2024-12-06 13:58:19.789358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.408 [2024-12-06 13:58:19.789369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:99336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.408 [2024-12-06 13:58:19.789378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.408 [2024-12-06 13:58:19.789389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:120640 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:19:20.408 [2024-12-06 13:58:19.789398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.408 [2024-12-06 13:58:19.789409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:103248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.408 [2024-12-06 13:58:19.789418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.408 [2024-12-06 13:58:19.789429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:19352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.408 [2024-12-06 13:58:19.789438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.408 [2024-12-06 13:58:19.789448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:108968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.408 [2024-12-06 13:58:19.789458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.408 [2024-12-06 13:58:19.789469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:36776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.408 [2024-12-06 13:58:19.789478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.408 [2024-12-06 13:58:19.789489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:108776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.408 [2024-12-06 13:58:19.789498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.408 [2024-12-06 13:58:19.789523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:94992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.408 [2024-12-06 13:58:19.789532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.408 [2024-12-06 13:58:19.789544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.408 [2024-12-06 13:58:19.789553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.408 [2024-12-06 13:58:19.789564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:125152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.408 [2024-12-06 13:58:19.789573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.408 [2024-12-06 13:58:19.789583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:25992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.408 [2024-12-06 13:58:19.789592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.408 [2024-12-06 13:58:19.789603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:101912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.408 
[2024-12-06 13:58:19.789612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.408 [2024-12-06 13:58:19.789622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:117400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.408 [2024-12-06 13:58:19.789632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.408 [2024-12-06 13:58:19.789649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.408 [2024-12-06 13:58:19.789658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.408 [2024-12-06 13:58:19.789669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:83152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.408 [2024-12-06 13:58:19.789678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.408 [2024-12-06 13:58:19.789689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:121736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.408 [2024-12-06 13:58:19.789697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.408 [2024-12-06 13:58:19.789722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:28928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.409 [2024-12-06 13:58:19.789754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.409 [2024-12-06 13:58:19.789765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:72232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.409 [2024-12-06 13:58:19.789774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.409 [2024-12-06 13:58:19.789784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:19920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.409 [2024-12-06 13:58:19.789793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.409 [2024-12-06 13:58:19.789804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:76272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.409 [2024-12-06 13:58:19.789813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.409 [2024-12-06 13:58:19.789823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:44544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.409 [2024-12-06 13:58:19.789832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.409 [2024-12-06 13:58:19.789842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:74536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.409 [2024-12-06 13:58:19.789851] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.409 [2024-12-06 13:58:19.789861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:33568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.409 [2024-12-06 13:58:19.789870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.409 [2024-12-06 13:58:19.789881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:50992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.409 [2024-12-06 13:58:19.789890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.409 [2024-12-06 13:58:19.789900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:114664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.409 [2024-12-06 13:58:19.789909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.409 [2024-12-06 13:58:19.789920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.409 [2024-12-06 13:58:19.789930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.409 [2024-12-06 13:58:19.789941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:111320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.409 [2024-12-06 13:58:19.789951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.409 [2024-12-06 13:58:19.789962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:110792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.409 [2024-12-06 13:58:19.789971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.409 [2024-12-06 13:58:19.789982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:104592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.409 [2024-12-06 13:58:19.789991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.409 [2024-12-06 13:58:19.790001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.409 [2024-12-06 13:58:19.790010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.409 [2024-12-06 13:58:19.790021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:91776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.409 [2024-12-06 13:58:19.790030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.409 [2024-12-06 13:58:19.790040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:74664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.409 [2024-12-06 13:58:19.790049] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.409 [2024-12-06 13:58:19.790060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:77120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.409 [2024-12-06 13:58:19.790073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.409 [2024-12-06 13:58:19.790084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:98912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.409 [2024-12-06 13:58:19.790093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.409 [2024-12-06 13:58:19.790103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:86832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.409 [2024-12-06 13:58:19.790112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.409 [2024-12-06 13:58:19.790123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:117816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.409 [2024-12-06 13:58:19.790132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.409 [2024-12-06 13:58:19.790150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:42712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.409 [2024-12-06 13:58:19.790161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.409 [2024-12-06 13:58:19.790172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:120632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.409 [2024-12-06 13:58:19.790180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.409 [2024-12-06 13:58:19.790191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:51576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.409 [2024-12-06 13:58:19.790200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.409 [2024-12-06 13:58:19.790211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:54912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.409 [2024-12-06 13:58:19.790219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.409 [2024-12-06 13:58:19.790230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:53672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.409 [2024-12-06 13:58:19.790239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.409 [2024-12-06 13:58:19.790250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:92304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.409 [2024-12-06 13:58:19.790259] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.409 [2024-12-06 13:58:19.790270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:83336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.409 [2024-12-06 13:58:19.790279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.409 [2024-12-06 13:58:19.790289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:105960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.409 [2024-12-06 13:58:19.790298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.409 [2024-12-06 13:58:19.790310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:19840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.409 [2024-12-06 13:58:19.790319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.409 [2024-12-06 13:58:19.790330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:30608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.409 [2024-12-06 13:58:19.790338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.410 [2024-12-06 13:58:19.790349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:48840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.410 [2024-12-06 13:58:19.790358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.410 [2024-12-06 13:58:19.790369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:64872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.410 [2024-12-06 13:58:19.790377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.410 [2024-12-06 13:58:19.790388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:105920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.410 [2024-12-06 13:58:19.790402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.410 [2024-12-06 13:58:19.790414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:60648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.410 [2024-12-06 13:58:19.790423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.410 [2024-12-06 13:58:19.790433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:39024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.410 [2024-12-06 13:58:19.790442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.410 [2024-12-06 13:58:19.790453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:122736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.410 [2024-12-06 13:58:19.790461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.410 [2024-12-06 13:58:19.790472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:80168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.410 [2024-12-06 13:58:19.790481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.410 [2024-12-06 13:58:19.790491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:51416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.410 [2024-12-06 13:58:19.790500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.410 [2024-12-06 13:58:19.790511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.410 [2024-12-06 13:58:19.790520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.410 [2024-12-06 13:58:19.790530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:101880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.410 [2024-12-06 13:58:19.790539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.410 [2024-12-06 13:58:19.790550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:121376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.410 [2024-12-06 13:58:19.790559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.410 [2024-12-06 13:58:19.790569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.410 [2024-12-06 13:58:19.790578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.410 [2024-12-06 13:58:19.790592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:86664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.410 [2024-12-06 13:58:19.790601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.410 [2024-12-06 13:58:19.790611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:66168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.410 [2024-12-06 13:58:19.790620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.410 [2024-12-06 13:58:19.790631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:126576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.410 [2024-12-06 13:58:19.790639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.410 [2024-12-06 13:58:19.790649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:60120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.410 [2024-12-06 13:58:19.790659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:19:20.410 [2024-12-06 13:58:19.790670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:90808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.410 [2024-12-06 13:58:19.790679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.410 [2024-12-06 13:58:19.790689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.410 [2024-12-06 13:58:19.790698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.410 [2024-12-06 13:58:19.790708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.410 [2024-12-06 13:58:19.790721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.410 [2024-12-06 13:58:19.790732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.410 [2024-12-06 13:58:19.790741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.410 [2024-12-06 13:58:19.790751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.410 [2024-12-06 13:58:19.790760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.410 [2024-12-06 13:58:19.790770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:94360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.410 [2024-12-06 13:58:19.790778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.410 [2024-12-06 13:58:19.790789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:51704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.410 [2024-12-06 13:58:19.790798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.410 [2024-12-06 13:58:19.790808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.410 [2024-12-06 13:58:19.790816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.410 [2024-12-06 13:58:19.790827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:77464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.410 [2024-12-06 13:58:19.790835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.410 [2024-12-06 13:58:19.790845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:55976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.410 [2024-12-06 13:58:19.790854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.410 [2024-12-06 
13:58:19.790864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:48728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.410 [2024-12-06 13:58:19.790873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.410 [2024-12-06 13:58:19.790883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:109424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.410 [2024-12-06 13:58:19.790892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.410 [2024-12-06 13:58:19.790907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:110328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.410 [2024-12-06 13:58:19.790916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.410 [2024-12-06 13:58:19.790926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.411 [2024-12-06 13:58:19.790935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.411 [2024-12-06 13:58:19.790945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.411 [2024-12-06 13:58:19.790954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.411 [2024-12-06 13:58:19.790964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.411 [2024-12-06 13:58:19.790973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.411 [2024-12-06 13:58:19.790984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:100160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.411 [2024-12-06 13:58:19.790993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.411 [2024-12-06 13:58:19.791004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.411 [2024-12-06 13:58:19.791013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.411 [2024-12-06 13:58:19.791024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.411 [2024-12-06 13:58:19.791037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.411 [2024-12-06 13:58:19.791048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:72920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.411 [2024-12-06 13:58:19.791057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.411 [2024-12-06 13:58:19.791068] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.411 [2024-12-06 13:58:19.791077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.411 [2024-12-06 13:58:19.791087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:60592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.411 [2024-12-06 13:58:19.791105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.411 [2024-12-06 13:58:19.791117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:73288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.411 [2024-12-06 13:58:19.791126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.411 [2024-12-06 13:58:19.791136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:47304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.411 [2024-12-06 13:58:19.791145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.411 [2024-12-06 13:58:19.791156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:96080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.411 [2024-12-06 13:58:19.791179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.411 [2024-12-06 13:58:19.791206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.411 [2024-12-06 13:58:19.791215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.411 [2024-12-06 13:58:19.791226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:99144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.411 [2024-12-06 13:58:19.791235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.411 [2024-12-06 13:58:19.791245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:28312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.411 [2024-12-06 13:58:19.791254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.411 [2024-12-06 13:58:19.791269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.411 [2024-12-06 13:58:19.791278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.411 [2024-12-06 13:58:19.791289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:50664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.411 [2024-12-06 13:58:19.791327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.411 [2024-12-06 13:58:19.791339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:18 nsid:1 lba:108624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.411 [2024-12-06 13:58:19.791348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.411 [2024-12-06 13:58:19.791359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.411 [2024-12-06 13:58:19.791369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.411 [2024-12-06 13:58:19.791380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:65896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.411 [2024-12-06 13:58:19.791389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.411 [2024-12-06 13:58:19.791401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.411 [2024-12-06 13:58:19.791410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.411 [2024-12-06 13:58:19.791423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:31392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.411 [2024-12-06 13:58:19.791443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.411 [2024-12-06 13:58:19.791455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:113744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.411 [2024-12-06 13:58:19.791464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.411 [2024-12-06 13:58:19.791476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:39248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.411 [2024-12-06 13:58:19.791485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.411 [2024-12-06 13:58:19.791496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:123056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.411 [2024-12-06 13:58:19.791506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.411 [2024-12-06 13:58:19.791517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.411 [2024-12-06 13:58:19.791527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.411 [2024-12-06 13:58:19.791538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:110864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.411 [2024-12-06 13:58:19.791548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.411 [2024-12-06 13:58:19.791559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:65576 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.411 [2024-12-06 13:58:19.791568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.411 [2024-12-06 13:58:19.791580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:67464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.411 [2024-12-06 13:58:19.791594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.411 [2024-12-06 13:58:19.791605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:118272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.411 [2024-12-06 13:58:19.791629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.411 [2024-12-06 13:58:19.791655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.411 [2024-12-06 13:58:19.791679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.411 [2024-12-06 13:58:19.791710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:84448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.412 [2024-12-06 13:58:19.791719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.412 [2024-12-06 13:58:19.791730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:57024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.412 [2024-12-06 13:58:19.791739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.412 [2024-12-06 13:58:19.791749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:125752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.412 [2024-12-06 13:58:19.791758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.412 [2024-12-06 13:58:19.791768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:36600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.412 [2024-12-06 13:58:19.791777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.412 [2024-12-06 13:58:19.791787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:116264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.412 [2024-12-06 13:58:19.791796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.412 [2024-12-06 13:58:19.791806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5f100 is same with the state(6) to be set 00:19:20.412 [2024-12-06 13:58:19.791818] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.412 [2024-12-06 13:58:19.791825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.412 [2024-12-06 13:58:19.791838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:22856 len:8 PRP1 0x0 PRP2 0x0 00:19:20.412 [2024-12-06 13:58:19.791846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.412 [2024-12-06 13:58:19.792156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:20.412 [2024-12-06 13:58:19.792182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f1070 (9): Bad file descriptor 00:19:20.412 [2024-12-06 13:58:19.792293] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:20.412 [2024-12-06 13:58:19.792315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f1070 with addr=10.0.0.3, port=4420 00:19:20.412 [2024-12-06 13:58:19.792326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f1070 is same with the state(6) to be set 00:19:20.412 [2024-12-06 13:58:19.792344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f1070 (9): Bad file descriptor 00:19:20.412 [2024-12-06 13:58:19.792359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:19:20.412 [2024-12-06 13:58:19.792368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:19:20.412 [2024-12-06 13:58:19.792378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:19:20.412 [2024-12-06 13:58:19.792388] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:19:20.412 [2024-12-06 13:58:19.792397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:20.412 13:58:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 82360 00:19:22.288 9812.00 IOPS, 38.33 MiB/s [2024-12-06T13:58:21.951Z] 6541.33 IOPS, 25.55 MiB/s [2024-12-06T13:58:21.951Z] [2024-12-06 13:58:21.792555] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:22.547 [2024-12-06 13:58:21.792776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f1070 with addr=10.0.0.3, port=4420 00:19:22.547 [2024-12-06 13:58:21.792923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f1070 is same with the state(6) to be set 00:19:22.547 [2024-12-06 13:58:21.793201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f1070 (9): Bad file descriptor 00:19:22.547 [2024-12-06 13:58:21.793360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:19:22.547 [2024-12-06 13:58:21.793429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:19:22.547 [2024-12-06 13:58:21.793656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:19:22.547 [2024-12-06 13:58:21.793709] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
00:19:22.547 [2024-12-06 13:58:21.793848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:24.423 4906.00 IOPS, 19.16 MiB/s [2024-12-06T13:58:23.827Z] 3924.80 IOPS, 15.33 MiB/s [2024-12-06T13:58:23.827Z] [2024-12-06 13:58:23.794019] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:24.423 [2024-12-06 13:58:23.794252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f1070 with addr=10.0.0.3, port=4420 00:19:24.423 [2024-12-06 13:58:23.794394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f1070 is same with the state(6) to be set 00:19:24.423 [2024-12-06 13:58:23.794519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f1070 (9): Bad file descriptor 00:19:24.423 [2024-12-06 13:58:23.794540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:19:24.423 [2024-12-06 13:58:23.794550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:19:24.423 [2024-12-06 13:58:23.794560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:19:24.423 [2024-12-06 13:58:23.794570] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:19:24.423 [2024-12-06 13:58:23.794580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:26.298 3270.67 IOPS, 12.78 MiB/s [2024-12-06T13:58:25.960Z] 2803.43 IOPS, 10.95 MiB/s [2024-12-06T13:58:25.960Z] [2024-12-06 13:58:25.794639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:19:26.556 [2024-12-06 13:58:25.794680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:19:26.556 [2024-12-06 13:58:25.794707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:19:26.556 [2024-12-06 13:58:25.794717] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:19:26.556 [2024-12-06 13:58:25.794728] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
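Note on the entries above: the repeated "uring_sock_create: connect() failed, errno = 111" and "Resetting controller failed" messages are the behaviour host/timeout.sh is exercising on purpose; the target listener has been removed, so bdev_nvme keeps retrying the controller on a fixed delay (hence the ~2 s spacing of the attempts at 13:58:19, :21, :23 and :25). That cadence comes from the reconnect options given when the controller is attached. A minimal sketch of attaching a controller with explicit reconnect knobs via SPDK's rpc.py; the flag names come from bdev_nvme_attach_controller, while the address, NQN and timeout values below are illustrative and not taken from this run:
# Illustrative sketch only -- values are examples, not this run's parameters.
scripts/rpc.py bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec -1 \
    --reconnect-delay-sec 2 \
    --fast-io-fail-timeout-sec 0
# --reconnect-delay-sec spaces the retries; --ctrlr-loss-timeout-sec -1 retries indefinitely.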
00:19:27.494 2453.00 IOPS, 9.58 MiB/s 00:19:27.494 Latency(us) 00:19:27.494 [2024-12-06T13:58:26.898Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.494 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:19:27.494 NVMe0n1 : 8.15 2409.32 9.41 15.72 0.00 52721.16 7030.23 7015926.69 00:19:27.494 [2024-12-06T13:58:26.898Z] =================================================================================================================== 00:19:27.494 [2024-12-06T13:58:26.898Z] Total : 2409.32 9.41 15.72 0.00 52721.16 7030.23 7015926.69 00:19:27.494 { 00:19:27.494 "results": [ 00:19:27.494 { 00:19:27.494 "job": "NVMe0n1", 00:19:27.494 "core_mask": "0x4", 00:19:27.494 "workload": "randread", 00:19:27.494 "status": "finished", 00:19:27.494 "queue_depth": 128, 00:19:27.494 "io_size": 4096, 00:19:27.494 "runtime": 8.145021, 00:19:27.494 "iops": 2409.324665952365, 00:19:27.494 "mibps": 9.411424476376427, 00:19:27.494 "io_failed": 128, 00:19:27.494 "io_timeout": 0, 00:19:27.494 "avg_latency_us": 52721.15823999411, 00:19:27.494 "min_latency_us": 7030.225454545454, 00:19:27.494 "max_latency_us": 7015926.69090909 00:19:27.494 } 00:19:27.494 ], 00:19:27.494 "core_count": 1 00:19:27.494 } 00:19:27.494 13:58:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:27.494 Attaching 5 probes... 00:19:27.494 1377.188698: reset bdev controller NVMe0 00:19:27.494 1377.268089: reconnect bdev controller NVMe0 00:19:27.494 3377.505987: reconnect delay bdev controller NVMe0 00:19:27.494 3377.521079: reconnect bdev controller NVMe0 00:19:27.494 5378.988563: reconnect delay bdev controller NVMe0 00:19:27.494 5379.002851: reconnect bdev controller NVMe0 00:19:27.494 7379.659463: reconnect delay bdev controller NVMe0 00:19:27.494 7379.675555: reconnect bdev controller NVMe0 00:19:27.494 13:58:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:19:27.494 13:58:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:19:27.494 13:58:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 82313 00:19:27.494 13:58:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:27.494 13:58:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82310 00:19:27.494 13:58:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82310 ']' 00:19:27.494 13:58:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82310 00:19:27.494 13:58:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:19:27.494 13:58:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:27.494 13:58:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82310 00:19:27.494 killing process with pid 82310 00:19:27.494 Received shutdown signal, test time was about 8.215177 seconds 00:19:27.494 00:19:27.494 Latency(us) 00:19:27.494 [2024-12-06T13:58:26.898Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.494 [2024-12-06T13:58:26.898Z] =================================================================================================================== 00:19:27.494 [2024-12-06T13:58:26.898Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:27.494 13:58:26 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:27.494 13:58:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:27.494 13:58:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82310' 00:19:27.494 13:58:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82310 00:19:27.494 13:58:26 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82310 00:19:27.754 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:28.014 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:19:28.014 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:19:28.014 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:28.014 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:19:28.014 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:28.014 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:19:28.014 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:28.014 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:28.014 rmmod nvme_tcp 00:19:28.014 rmmod nvme_fabrics 00:19:28.014 rmmod nvme_keyring 00:19:28.014 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:28.014 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:19:28.014 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:19:28.014 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 81884 ']' 00:19:28.014 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 81884 00:19:28.014 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 81884 ']' 00:19:28.014 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 81884 00:19:28.014 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:19:28.014 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:28.015 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81884 00:19:28.015 killing process with pid 81884 00:19:28.015 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:28.015 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:28.015 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81884' 00:19:28.015 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 81884 00:19:28.015 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 81884 00:19:28.274 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:28.274 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:28.274 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:28.274 13:58:27 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:19:28.274 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:19:28.274 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:28.274 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:19:28.274 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:28.274 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:28.274 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:28.274 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:28.274 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:28.274 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:28.274 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:28.274 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:28.274 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:28.274 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:28.274 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:28.534 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:28.534 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:28.534 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:28.534 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:28.534 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:28.534 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:28.534 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:28.534 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.534 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:19:28.534 00:19:28.534 real 0m46.018s 00:19:28.534 user 2m14.781s 00:19:28.534 sys 0m5.216s 00:19:28.534 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:28.534 13:58:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:28.534 ************************************ 00:19:28.534 END TEST nvmf_timeout 00:19:28.534 ************************************ 00:19:28.534 13:58:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:19:28.534 13:58:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:28.534 00:19:28.534 real 5m3.521s 00:19:28.534 user 13m7.449s 00:19:28.534 sys 1m9.068s 00:19:28.534 13:58:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:28.534 13:58:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 
00:19:28.534 ************************************ 00:19:28.534 END TEST nvmf_host 00:19:28.534 ************************************ 00:19:28.534 13:58:27 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:19:28.534 13:58:27 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:19:28.534 ************************************ 00:19:28.534 END TEST nvmf_tcp 00:19:28.534 ************************************ 00:19:28.534 00:19:28.534 real 12m35.851s 00:19:28.534 user 30m10.314s 00:19:28.534 sys 3m8.674s 00:19:28.534 13:58:27 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:28.534 13:58:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:28.795 13:58:27 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:19:28.795 13:58:27 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:28.795 13:58:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:28.795 13:58:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:28.795 13:58:27 -- common/autotest_common.sh@10 -- # set +x 00:19:28.795 ************************************ 00:19:28.795 START TEST nvmf_dif 00:19:28.795 ************************************ 00:19:28.795 13:58:27 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:28.795 * Looking for test storage... 00:19:28.795 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:28.795 13:58:28 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:28.795 13:58:28 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:19:28.795 13:58:28 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:28.795 13:58:28 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:28.795 13:58:28 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:28.795 13:58:28 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:28.795 13:58:28 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:28.795 13:58:28 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:19:28.795 13:58:28 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:19:28.795 13:58:28 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:19:28.795 13:58:28 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:19:28.795 13:58:28 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:19:28.795 13:58:28 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:19:28.795 13:58:28 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:19:28.795 13:58:28 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:28.795 13:58:28 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:19:28.795 13:58:28 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:19:28.795 13:58:28 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:28.795 13:58:28 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:28.795 13:58:28 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:19:28.795 13:58:28 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:19:28.795 13:58:28 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:28.795 13:58:28 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:19:28.795 13:58:28 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:19:28.795 13:58:28 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:19:28.795 13:58:28 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:19:28.795 13:58:28 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:28.795 13:58:28 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:19:28.795 13:58:28 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:19:28.795 13:58:28 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:28.795 13:58:28 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:28.795 13:58:28 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:19:28.795 13:58:28 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:28.795 13:58:28 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:28.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.795 --rc genhtml_branch_coverage=1 00:19:28.795 --rc genhtml_function_coverage=1 00:19:28.795 --rc genhtml_legend=1 00:19:28.795 --rc geninfo_all_blocks=1 00:19:28.795 --rc geninfo_unexecuted_blocks=1 00:19:28.795 00:19:28.795 ' 00:19:28.795 13:58:28 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:28.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.795 --rc genhtml_branch_coverage=1 00:19:28.795 --rc genhtml_function_coverage=1 00:19:28.795 --rc genhtml_legend=1 00:19:28.795 --rc geninfo_all_blocks=1 00:19:28.795 --rc geninfo_unexecuted_blocks=1 00:19:28.795 00:19:28.795 ' 00:19:28.795 13:58:28 nvmf_dif -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:28.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.795 --rc genhtml_branch_coverage=1 00:19:28.795 --rc genhtml_function_coverage=1 00:19:28.795 --rc genhtml_legend=1 00:19:28.795 --rc geninfo_all_blocks=1 00:19:28.795 --rc geninfo_unexecuted_blocks=1 00:19:28.795 00:19:28.795 ' 00:19:28.795 13:58:28 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:28.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.795 --rc genhtml_branch_coverage=1 00:19:28.795 --rc genhtml_function_coverage=1 00:19:28.795 --rc genhtml_legend=1 00:19:28.795 --rc geninfo_all_blocks=1 00:19:28.795 --rc geninfo_unexecuted_blocks=1 00:19:28.795 00:19:28.795 ' 00:19:28.795 13:58:28 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:28.795 13:58:28 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:19:28.795 13:58:28 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:28.795 13:58:28 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:28.795 13:58:28 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:28.795 13:58:28 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:28.795 13:58:28 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:28.795 13:58:28 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:28.795 13:58:28 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:28.795 13:58:28 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:28.795 13:58:28 nvmf_dif -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=cfa2def7-c8af-457f-82a0-b312efdea7f4 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:28.796 13:58:28 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:19:28.796 13:58:28 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:28.796 13:58:28 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:28.796 13:58:28 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:28.796 13:58:28 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.796 13:58:28 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.796 13:58:28 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.796 13:58:28 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:19:28.796 13:58:28 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:28.796 13:58:28 nvmf_dif -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:28.796 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:28.796 13:58:28 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:19:28.796 13:58:28 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:19:28.796 13:58:28 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:19:28.796 13:58:28 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:19:28.796 13:58:28 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:28.796 13:58:28 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:28.796 13:58:28 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:28.796 13:58:28 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:29.054 Cannot find device 
"nvmf_init_br" 00:19:29.054 13:58:28 nvmf_dif -- nvmf/common.sh@162 -- # true 00:19:29.054 13:58:28 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:29.054 Cannot find device "nvmf_init_br2" 00:19:29.054 13:58:28 nvmf_dif -- nvmf/common.sh@163 -- # true 00:19:29.054 13:58:28 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:29.054 Cannot find device "nvmf_tgt_br" 00:19:29.054 13:58:28 nvmf_dif -- nvmf/common.sh@164 -- # true 00:19:29.054 13:58:28 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:29.054 Cannot find device "nvmf_tgt_br2" 00:19:29.054 13:58:28 nvmf_dif -- nvmf/common.sh@165 -- # true 00:19:29.054 13:58:28 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:29.054 Cannot find device "nvmf_init_br" 00:19:29.054 13:58:28 nvmf_dif -- nvmf/common.sh@166 -- # true 00:19:29.054 13:58:28 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:29.054 Cannot find device "nvmf_init_br2" 00:19:29.054 13:58:28 nvmf_dif -- nvmf/common.sh@167 -- # true 00:19:29.054 13:58:28 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:29.054 Cannot find device "nvmf_tgt_br" 00:19:29.054 13:58:28 nvmf_dif -- nvmf/common.sh@168 -- # true 00:19:29.054 13:58:28 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:29.054 Cannot find device "nvmf_tgt_br2" 00:19:29.054 13:58:28 nvmf_dif -- nvmf/common.sh@169 -- # true 00:19:29.054 13:58:28 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:29.054 Cannot find device "nvmf_br" 00:19:29.054 13:58:28 nvmf_dif -- nvmf/common.sh@170 -- # true 00:19:29.054 13:58:28 nvmf_dif -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:29.054 Cannot find device "nvmf_init_if" 00:19:29.054 13:58:28 nvmf_dif -- nvmf/common.sh@171 -- # true 00:19:29.054 13:58:28 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:29.054 Cannot find device "nvmf_init_if2" 00:19:29.054 13:58:28 nvmf_dif -- nvmf/common.sh@172 -- # true 00:19:29.054 13:58:28 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:29.054 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:29.054 13:58:28 nvmf_dif -- nvmf/common.sh@173 -- # true 00:19:29.054 13:58:28 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:29.054 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:29.054 13:58:28 nvmf_dif -- nvmf/common.sh@174 -- # true 00:19:29.054 13:58:28 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:29.054 13:58:28 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:29.054 13:58:28 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:29.055 13:58:28 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:29.055 13:58:28 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:29.055 13:58:28 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:29.055 13:58:28 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:29.055 13:58:28 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:29.055 13:58:28 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev 
nvmf_init_if2 00:19:29.055 13:58:28 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:29.055 13:58:28 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:29.055 13:58:28 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:29.055 13:58:28 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:29.055 13:58:28 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:29.055 13:58:28 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:29.055 13:58:28 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:29.055 13:58:28 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:29.055 13:58:28 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:29.055 13:58:28 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:29.055 13:58:28 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:29.055 13:58:28 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:29.055 13:58:28 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:29.055 13:58:28 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:29.313 13:58:28 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:29.313 13:58:28 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:29.313 13:58:28 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:29.313 13:58:28 nvmf_dif -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:29.313 13:58:28 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:29.313 13:58:28 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:29.313 13:58:28 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:29.313 13:58:28 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:29.313 13:58:28 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:29.313 13:58:28 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:29.313 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:29.313 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:19:29.313 00:19:29.313 --- 10.0.0.3 ping statistics --- 00:19:29.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.313 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:19:29.313 13:58:28 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:29.313 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:19:29.313 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:19:29.313 00:19:29.313 --- 10.0.0.4 ping statistics --- 00:19:29.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.313 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:19:29.313 13:58:28 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:29.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:29.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:19:29.313 00:19:29.313 --- 10.0.0.1 ping statistics --- 00:19:29.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.313 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:19:29.313 13:58:28 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:29.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:29.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:19:29.313 00:19:29.313 --- 10.0.0.2 ping statistics --- 00:19:29.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.313 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:19:29.313 13:58:28 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:29.313 13:58:28 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:19:29.313 13:58:28 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:19:29.313 13:58:28 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:29.571 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:29.571 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:29.571 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:29.571 13:58:28 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:29.571 13:58:28 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:29.571 13:58:28 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:29.571 13:58:28 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:29.571 13:58:28 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:29.571 13:58:28 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:29.571 13:58:28 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:19:29.571 13:58:28 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:19:29.571 13:58:28 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:29.571 13:58:28 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:29.571 13:58:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:29.571 13:58:28 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:29.571 13:58:28 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=82853 00:19:29.571 13:58:28 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 82853 00:19:29.571 13:58:28 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 82853 ']' 00:19:29.571 13:58:28 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:29.571 13:58:28 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:29.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:29.571 13:58:28 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
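For reference, the topology that the pings above verify is built purely from iproute2 primitives by nvmf/common.sh: veth pairs for the initiator side (10.0.0.1/10.0.0.2) and for the target side (10.0.0.3/10.0.0.4, moved into the nvmf_tgt_ns_spdk namespace), joined by the nvmf_br bridge. A condensed, single-pair sketch of the equivalent commands follows; the run above creates two pairs on each side and also installs the iptables ACCEPT rules shown earlier, which are omitted here:
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ping -c 1 10.0.0.3                                   # initiator -> target, as checked above
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> initiator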
00:19:29.571 13:58:28 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:29.571 13:58:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:29.830 [2024-12-06 13:58:28.989550] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:19:29.830 [2024-12-06 13:58:28.989631] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:29.830 [2024-12-06 13:58:29.133827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.830 [2024-12-06 13:58:29.186910] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:29.830 [2024-12-06 13:58:29.186973] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:29.830 [2024-12-06 13:58:29.186988] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:29.830 [2024-12-06 13:58:29.186998] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:29.830 [2024-12-06 13:58:29.187007] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:29.830 [2024-12-06 13:58:29.187505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.088 [2024-12-06 13:58:29.246245] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:30.088 13:58:29 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:30.088 13:58:29 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:19:30.088 13:58:29 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:30.088 13:58:29 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:30.088 13:58:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:30.088 13:58:29 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:30.088 13:58:29 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:19:30.088 13:58:29 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:19:30.088 13:58:29 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.089 13:58:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:30.089 [2024-12-06 13:58:29.368248] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:30.089 13:58:29 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.089 13:58:29 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:19:30.089 13:58:29 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:30.089 13:58:29 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:30.089 13:58:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:30.089 ************************************ 00:19:30.089 START TEST fio_dif_1_default 00:19:30.089 ************************************ 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:19:30.089 13:58:29 
nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:30.089 bdev_null0 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:30.089 [2024-12-06 13:58:29.412437] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:30.089 { 00:19:30.089 "params": { 
00:19:30.089 "name": "Nvme$subsystem", 00:19:30.089 "trtype": "$TEST_TRANSPORT", 00:19:30.089 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:30.089 "adrfam": "ipv4", 00:19:30.089 "trsvcid": "$NVMF_PORT", 00:19:30.089 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:30.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:30.089 "hdgst": ${hdgst:-false}, 00:19:30.089 "ddgst": ${ddgst:-false} 00:19:30.089 }, 00:19:30.089 "method": "bdev_nvme_attach_controller" 00:19:30.089 } 00:19:30.089 EOF 00:19:30.089 )") 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:30.089 "params": { 00:19:30.089 "name": "Nvme0", 00:19:30.089 "trtype": "tcp", 00:19:30.089 "traddr": "10.0.0.3", 00:19:30.089 "adrfam": "ipv4", 00:19:30.089 "trsvcid": "4420", 00:19:30.089 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:30.089 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:30.089 "hdgst": false, 00:19:30.089 "ddgst": false 00:19:30.089 }, 00:19:30.089 "method": "bdev_nvme_attach_controller" 00:19:30.089 }' 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:30.089 13:58:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:30.348 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:30.348 fio-3.35 00:19:30.348 Starting 1 thread 00:19:42.594 00:19:42.594 filename0: (groupid=0, jobs=1): err= 0: pid=82912: Fri Dec 6 13:58:40 2024 00:19:42.594 read: IOPS=9945, BW=38.8MiB/s (40.7MB/s)(389MiB/10001msec) 00:19:42.594 slat (usec): min=5, max=260, avg= 7.73, stdev= 3.37 00:19:42.594 clat (usec): min=319, max=3375, avg=378.97, stdev=45.30 00:19:42.594 lat (usec): min=325, max=3386, avg=386.69, stdev=46.06 00:19:42.594 clat percentiles (usec): 00:19:42.594 | 1.00th=[ 326], 5.00th=[ 330], 10.00th=[ 338], 20.00th=[ 347], 00:19:42.594 | 30.00th=[ 359], 40.00th=[ 367], 50.00th=[ 375], 60.00th=[ 383], 00:19:42.594 | 70.00th=[ 392], 80.00th=[ 404], 90.00th=[ 424], 95.00th=[ 453], 00:19:42.594 | 99.00th=[ 519], 99.50th=[ 545], 99.90th=[ 619], 99.95th=[ 660], 00:19:42.594 | 99.99th=[ 906] 00:19:42.594 bw ( KiB/s): min=37333, max=40768, per=100.00%, avg=39787.21, stdev=751.64, samples=19 00:19:42.594 iops : min= 9333, max=10192, avg=9946.79, stdev=187.96, samples=19 00:19:42.594 lat (usec) : 500=98.48%, 750=1.51%, 1000=0.01% 00:19:42.594 lat (msec) : 4=0.01% 00:19:42.594 cpu : usr=85.53%, sys=12.60%, ctx=42, majf=0, minf=9 00:19:42.594 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:42.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:42.594 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:42.594 issued rwts: total=99464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:42.594 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:42.594 00:19:42.594 Run status group 0 (all jobs): 00:19:42.594 
READ: bw=38.8MiB/s (40.7MB/s), 38.8MiB/s-38.8MiB/s (40.7MB/s-40.7MB/s), io=389MiB (407MB), run=10001-10001msec 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.594 00:19:42.594 real 0m11.017s 00:19:42.594 user 0m9.204s 00:19:42.594 sys 0m1.536s 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:42.594 ************************************ 00:19:42.594 END TEST fio_dif_1_default 00:19:42.594 ************************************ 00:19:42.594 13:58:40 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:19:42.594 13:58:40 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:42.594 13:58:40 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:42.594 13:58:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:42.594 ************************************ 00:19:42.594 START TEST fio_dif_1_multi_subsystems 00:19:42.594 ************************************ 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:42.594 bdev_null0 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:42.594 [2024-12-06 13:58:40.480583] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:42.594 bdev_null1 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 
-s 4420 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:42.594 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:42.594 { 00:19:42.594 "params": { 00:19:42.594 "name": "Nvme$subsystem", 00:19:42.594 "trtype": "$TEST_TRANSPORT", 00:19:42.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:42.595 "adrfam": "ipv4", 00:19:42.595 "trsvcid": "$NVMF_PORT", 00:19:42.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:42.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:42.595 "hdgst": ${hdgst:-false}, 00:19:42.595 "ddgst": ${ddgst:-false} 00:19:42.595 }, 00:19:42.595 "method": "bdev_nvme_attach_controller" 00:19:42.595 } 00:19:42.595 EOF 00:19:42.595 )") 00:19:42.595 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:19:42.595 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:42.595 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:19:42.595 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:42.595 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:19:42.595 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:42.595 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:42.595 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:42.595 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:19:42.595 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:42.595 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:19:42.595 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:42.595 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:42.595 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:19:42.595 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@72 -- # (( file <= files )) 00:19:42.595 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:19:42.595 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:19:42.595 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:42.595 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:42.595 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:42.595 { 00:19:42.595 "params": { 00:19:42.595 "name": "Nvme$subsystem", 00:19:42.595 "trtype": "$TEST_TRANSPORT", 00:19:42.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:42.595 "adrfam": "ipv4", 00:19:42.595 "trsvcid": "$NVMF_PORT", 00:19:42.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:42.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:42.595 "hdgst": ${hdgst:-false}, 00:19:42.595 "ddgst": ${ddgst:-false} 00:19:42.595 }, 00:19:42.595 "method": "bdev_nvme_attach_controller" 00:19:42.595 } 00:19:42.595 EOF 00:19:42.595 )") 00:19:42.595 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:19:42.595 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:19:42.595 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:19:42.595 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:19:42.595 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:19:42.595 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:42.595 "params": { 00:19:42.595 "name": "Nvme0", 00:19:42.595 "trtype": "tcp", 00:19:42.595 "traddr": "10.0.0.3", 00:19:42.595 "adrfam": "ipv4", 00:19:42.595 "trsvcid": "4420", 00:19:42.595 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:42.595 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:42.595 "hdgst": false, 00:19:42.595 "ddgst": false 00:19:42.595 }, 00:19:42.595 "method": "bdev_nvme_attach_controller" 00:19:42.595 },{ 00:19:42.595 "params": { 00:19:42.595 "name": "Nvme1", 00:19:42.595 "trtype": "tcp", 00:19:42.595 "traddr": "10.0.0.3", 00:19:42.595 "adrfam": "ipv4", 00:19:42.595 "trsvcid": "4420", 00:19:42.595 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:42.595 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:42.595 "hdgst": false, 00:19:42.595 "ddgst": false 00:19:42.595 }, 00:19:42.595 "method": "bdev_nvme_attach_controller" 00:19:42.595 }' 00:19:42.595 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:42.595 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:42.595 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:42.595 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:42.595 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:42.595 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:42.595 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:42.595 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:42.595 13:58:40 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:42.595 13:58:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:42.595 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:42.595 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:42.595 fio-3.35 00:19:42.595 Starting 2 threads 00:19:52.575 00:19:52.575 filename0: (groupid=0, jobs=1): err= 0: pid=83077: Fri Dec 6 13:58:51 2024 00:19:52.575 read: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(198MiB/10001msec) 00:19:52.575 slat (nsec): min=6030, max=88486, avg=15412.54, stdev=7702.12 00:19:52.575 clat (usec): min=523, max=1312, avg=745.73, stdev=82.80 00:19:52.575 lat (usec): min=549, max=1336, avg=761.14, stdev=85.89 00:19:52.575 clat percentiles (usec): 00:19:52.575 | 1.00th=[ 603], 5.00th=[ 635], 10.00th=[ 660], 20.00th=[ 685], 00:19:52.575 | 30.00th=[ 701], 40.00th=[ 717], 50.00th=[ 734], 60.00th=[ 750], 00:19:52.575 | 70.00th=[ 775], 80.00th=[ 807], 90.00th=[ 857], 95.00th=[ 906], 00:19:52.575 | 99.00th=[ 1004], 99.50th=[ 1037], 99.90th=[ 1123], 99.95th=[ 1172], 00:19:52.575 | 99.99th=[ 1254] 00:19:52.575 bw ( KiB/s): min=18336, max=21696, per=50.00%, avg=20321.68, stdev=1211.48, samples=19 00:19:52.575 iops : min= 4584, max= 5424, avg=5080.42, stdev=302.87, samples=19 00:19:52.575 lat (usec) : 750=60.60%, 1000=38.36% 00:19:52.575 lat (msec) : 2=1.04% 00:19:52.575 cpu : usr=91.86%, sys=6.79%, ctx=13, majf=0, minf=0 00:19:52.575 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:52.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.575 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.575 issued rwts: total=50804,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:52.575 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:52.575 filename1: (groupid=0, jobs=1): err= 0: pid=83078: Fri Dec 6 13:58:51 2024 00:19:52.575 read: IOPS=5080, BW=19.8MiB/s (20.8MB/s)(198MiB/10001msec) 00:19:52.575 slat (nsec): min=5980, max=94269, avg=15230.79, stdev=8070.12 00:19:52.575 clat (usec): min=371, max=1318, avg=746.01, stdev=76.99 00:19:52.575 lat (usec): min=377, max=1340, avg=761.24, stdev=80.00 00:19:52.575 clat percentiles (usec): 00:19:52.575 | 1.00th=[ 635], 5.00th=[ 660], 10.00th=[ 668], 20.00th=[ 685], 00:19:52.575 | 30.00th=[ 701], 40.00th=[ 709], 50.00th=[ 725], 60.00th=[ 742], 00:19:52.575 | 70.00th=[ 766], 80.00th=[ 799], 90.00th=[ 857], 95.00th=[ 898], 00:19:52.575 | 99.00th=[ 996], 99.50th=[ 1029], 99.90th=[ 1123], 99.95th=[ 1172], 00:19:52.575 | 99.99th=[ 1254] 00:19:52.575 bw ( KiB/s): min=18336, max=21728, per=50.01%, avg=20323.37, stdev=1213.51, samples=19 00:19:52.575 iops : min= 4584, max= 5432, avg=5080.84, stdev=303.38, samples=19 00:19:52.575 lat (usec) : 500=0.01%, 750=62.50%, 1000=36.60% 00:19:52.575 lat (msec) : 2=0.89% 00:19:52.575 cpu : usr=91.56%, sys=7.18%, ctx=20, majf=0, minf=0 00:19:52.575 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:52.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.575 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.575 issued rwts: total=50808,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:19:52.575 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:52.575 00:19:52.575 Run status group 0 (all jobs): 00:19:52.575 READ: bw=39.7MiB/s (41.6MB/s), 19.8MiB/s-19.8MiB/s (20.8MB/s-20.8MB/s), io=397MiB (416MB), run=10001-10001msec 00:19:52.575 13:58:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:19:52.575 13:58:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:19:52.575 13:58:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:19:52.575 13:58:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:52.575 13:58:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:19:52.575 13:58:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:52.575 13:58:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.575 13:58:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:52.575 13:58:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.575 13:58:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:52.575 13:58:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.575 13:58:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:52.575 13:58:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.576 13:58:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:19:52.576 13:58:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:52.576 13:58:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:19:52.576 13:58:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:52.576 13:58:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.576 13:58:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:52.576 13:58:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.576 13:58:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:52.576 13:58:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.576 13:58:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:52.576 13:58:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.576 00:19:52.576 real 0m11.130s 00:19:52.576 user 0m19.062s 00:19:52.576 sys 0m1.713s 00:19:52.576 13:58:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:52.576 13:58:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:52.576 ************************************ 00:19:52.576 END TEST fio_dif_1_multi_subsystems 00:19:52.576 ************************************ 00:19:52.576 13:58:51 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:19:52.576 13:58:51 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:52.576 13:58:51 nvmf_dif 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:52.576 13:58:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:52.576 ************************************ 00:19:52.576 START TEST fio_dif_rand_params 00:19:52.576 ************************************ 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:52.576 bdev_null0 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:52.576 [2024-12-06 13:58:51.663450] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 
-- # fio /dev/fd/62 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:52.576 { 00:19:52.576 "params": { 00:19:52.576 "name": "Nvme$subsystem", 00:19:52.576 "trtype": "$TEST_TRANSPORT", 00:19:52.576 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:52.576 "adrfam": "ipv4", 00:19:52.576 "trsvcid": "$NVMF_PORT", 00:19:52.576 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:52.576 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:52.576 "hdgst": ${hdgst:-false}, 00:19:52.576 "ddgst": ${ddgst:-false} 00:19:52.576 }, 00:19:52.576 "method": "bdev_nvme_attach_controller" 00:19:52.576 } 00:19:52.576 EOF 00:19:52.576 )") 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
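Before launching fio, the helper traced here probes the SPDK fio plugin for linked sanitizer runtimes and preloads them ahead of the plugin itself, then starts fio with the spdk_bdev ioengine, taking the JSON config and the generated job file from the file descriptors supplied by the test harness. A minimal sketch of that sequence, using the paths from this run:

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
preload=
for sanitizer in libasan libclang_rt.asan; do
    # Resolve the sanitizer runtime path (third ldd column), if the plugin links it.
    lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    if [[ -n $lib ]]; then
        preload+="$lib "
    fi
done
# Any sanitizer runtime must come before the plugin in LD_PRELOAD;
# /dev/fd/62 carries the JSON config and /dev/fd/61 the fio job file.
LD_PRELOAD="$preload$plugin" \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61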
00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:52.576 "params": { 00:19:52.576 "name": "Nvme0", 00:19:52.576 "trtype": "tcp", 00:19:52.576 "traddr": "10.0.0.3", 00:19:52.576 "adrfam": "ipv4", 00:19:52.576 "trsvcid": "4420", 00:19:52.576 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:52.576 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:52.576 "hdgst": false, 00:19:52.576 "ddgst": false 00:19:52.576 }, 00:19:52.576 "method": "bdev_nvme_attach_controller" 00:19:52.576 }' 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:52.576 13:58:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:52.576 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:52.576 ... 
00:19:52.576 fio-3.35 00:19:52.576 Starting 3 threads 00:19:59.181 00:19:59.181 filename0: (groupid=0, jobs=1): err= 0: pid=83234: Fri Dec 6 13:58:57 2024 00:19:59.181 read: IOPS=263, BW=32.9MiB/s (34.5MB/s)(165MiB/5006msec) 00:19:59.181 slat (nsec): min=6490, max=71020, avg=11130.53, stdev=5317.82 00:19:59.181 clat (usec): min=8195, max=14193, avg=11376.70, stdev=1006.59 00:19:59.181 lat (usec): min=8218, max=14206, avg=11387.83, stdev=1007.09 00:19:59.181 clat percentiles (usec): 00:19:59.181 | 1.00th=[10290], 5.00th=[10421], 10.00th=[10421], 20.00th=[10552], 00:19:59.181 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10945], 60.00th=[11207], 00:19:59.181 | 70.00th=[11731], 80.00th=[12518], 90.00th=[13042], 95.00th=[13435], 00:19:59.181 | 99.00th=[13698], 99.50th=[13960], 99.90th=[14222], 99.95th=[14222], 00:19:59.181 | 99.99th=[14222] 00:19:59.181 bw ( KiB/s): min=29952, max=36096, per=33.46%, avg=33792.00, stdev=2335.78, samples=9 00:19:59.181 iops : min= 234, max= 282, avg=264.00, stdev=18.25, samples=9 00:19:59.181 lat (msec) : 10=0.23%, 20=99.77% 00:19:59.181 cpu : usr=93.27%, sys=6.25%, ctx=5, majf=0, minf=0 00:19:59.181 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:59.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.181 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.181 issued rwts: total=1317,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.181 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:59.181 filename0: (groupid=0, jobs=1): err= 0: pid=83235: Fri Dec 6 13:58:57 2024 00:19:59.181 read: IOPS=263, BW=32.9MiB/s (34.5MB/s)(165MiB/5007msec) 00:19:59.181 slat (usec): min=6, max=232, avg=13.88, stdev= 9.42 00:19:59.181 clat (usec): min=8226, max=14424, avg=11370.08, stdev=1019.40 00:19:59.181 lat (usec): min=8249, max=14497, avg=11383.96, stdev=1020.31 00:19:59.181 clat percentiles (usec): 00:19:59.181 | 1.00th=[10290], 5.00th=[10421], 10.00th=[10421], 20.00th=[10552], 00:19:59.181 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10945], 60.00th=[11207], 00:19:59.181 | 70.00th=[11731], 80.00th=[12387], 90.00th=[13042], 95.00th=[13304], 00:19:59.181 | 99.00th=[13829], 99.50th=[13960], 99.90th=[14353], 99.95th=[14484], 00:19:59.181 | 99.99th=[14484] 00:19:59.181 bw ( KiB/s): min=29184, max=36096, per=33.37%, avg=33706.67, stdev=2501.74, samples=9 00:19:59.181 iops : min= 228, max= 282, avg=263.33, stdev=19.54, samples=9 00:19:59.181 lat (msec) : 10=0.46%, 20=99.54% 00:19:59.181 cpu : usr=93.99%, sys=5.45%, ctx=17, majf=0, minf=0 00:19:59.181 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:59.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.181 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.181 issued rwts: total=1317,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.181 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:59.181 filename0: (groupid=0, jobs=1): err= 0: pid=83236: Fri Dec 6 13:58:57 2024 00:19:59.181 read: IOPS=263, BW=32.9MiB/s (34.5MB/s)(165MiB/5004msec) 00:19:59.181 slat (nsec): min=6376, max=57767, avg=11523.91, stdev=6492.39 00:19:59.181 clat (usec): min=9260, max=14361, avg=11367.77, stdev=1000.80 00:19:59.181 lat (usec): min=9274, max=14390, avg=11379.30, stdev=1001.57 00:19:59.181 clat percentiles (usec): 00:19:59.181 | 1.00th=[10159], 5.00th=[10421], 10.00th=[10421], 20.00th=[10552], 00:19:59.181 | 30.00th=[10683], 40.00th=[10814], 
50.00th=[10945], 60.00th=[11207], 00:19:59.181 | 70.00th=[11731], 80.00th=[12387], 90.00th=[13042], 95.00th=[13304], 00:19:59.181 | 99.00th=[13829], 99.50th=[13829], 99.90th=[14353], 99.95th=[14353], 00:19:59.181 | 99.99th=[14353] 00:19:59.181 bw ( KiB/s): min=29184, max=36864, per=33.46%, avg=33792.00, stdev=2632.57, samples=9 00:19:59.181 iops : min= 228, max= 288, avg=264.00, stdev=20.57, samples=9 00:19:59.181 lat (msec) : 10=0.46%, 20=99.54% 00:19:59.181 cpu : usr=94.64%, sys=4.82%, ctx=9, majf=0, minf=0 00:19:59.181 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:59.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.181 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.181 issued rwts: total=1317,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.181 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:59.181 00:19:59.181 Run status group 0 (all jobs): 00:19:59.181 READ: bw=98.6MiB/s (103MB/s), 32.9MiB/s-32.9MiB/s (34.5MB/s-34.5MB/s), io=494MiB (518MB), run=5004-5007msec 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:19:59.181 13:58:57 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:59.181 bdev_null0 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:59.181 [2024-12-06 13:58:57.662880] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:59.181 bdev_null1 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.181 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:59.182 bdev_null2 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:59.182 { 00:19:59.182 "params": { 00:19:59.182 "name": "Nvme$subsystem", 00:19:59.182 "trtype": "$TEST_TRANSPORT", 00:19:59.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:59.182 "adrfam": "ipv4", 00:19:59.182 "trsvcid": "$NVMF_PORT", 00:19:59.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:19:59.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:59.182 "hdgst": ${hdgst:-false}, 00:19:59.182 "ddgst": ${ddgst:-false} 00:19:59.182 }, 00:19:59.182 "method": "bdev_nvme_attach_controller" 00:19:59.182 } 00:19:59.182 EOF 00:19:59.182 )") 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:59.182 { 00:19:59.182 "params": { 00:19:59.182 "name": "Nvme$subsystem", 00:19:59.182 "trtype": "$TEST_TRANSPORT", 00:19:59.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:59.182 "adrfam": "ipv4", 00:19:59.182 "trsvcid": "$NVMF_PORT", 00:19:59.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:59.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:59.182 "hdgst": ${hdgst:-false}, 00:19:59.182 "ddgst": ${ddgst:-false} 00:19:59.182 }, 00:19:59.182 "method": "bdev_nvme_attach_controller" 00:19:59.182 } 00:19:59.182 EOF 00:19:59.182 )") 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:19:59.182 13:58:57 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:59.182 { 00:19:59.182 "params": { 00:19:59.182 "name": "Nvme$subsystem", 00:19:59.182 "trtype": "$TEST_TRANSPORT", 00:19:59.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:59.182 "adrfam": "ipv4", 00:19:59.182 "trsvcid": "$NVMF_PORT", 00:19:59.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:59.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:59.182 "hdgst": ${hdgst:-false}, 00:19:59.182 "ddgst": ${ddgst:-false} 00:19:59.182 }, 00:19:59.182 "method": "bdev_nvme_attach_controller" 00:19:59.182 } 00:19:59.182 EOF 00:19:59.182 )") 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:59.182 "params": { 00:19:59.182 "name": "Nvme0", 00:19:59.182 "trtype": "tcp", 00:19:59.182 "traddr": "10.0.0.3", 00:19:59.182 "adrfam": "ipv4", 00:19:59.182 "trsvcid": "4420", 00:19:59.182 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:59.182 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:59.182 "hdgst": false, 00:19:59.182 "ddgst": false 00:19:59.182 }, 00:19:59.182 "method": "bdev_nvme_attach_controller" 00:19:59.182 },{ 00:19:59.182 "params": { 00:19:59.182 "name": "Nvme1", 00:19:59.182 "trtype": "tcp", 00:19:59.182 "traddr": "10.0.0.3", 00:19:59.182 "adrfam": "ipv4", 00:19:59.182 "trsvcid": "4420", 00:19:59.182 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.182 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:59.182 "hdgst": false, 00:19:59.182 "ddgst": false 00:19:59.182 }, 00:19:59.182 "method": "bdev_nvme_attach_controller" 00:19:59.182 },{ 00:19:59.182 "params": { 00:19:59.182 "name": "Nvme2", 00:19:59.182 "trtype": "tcp", 00:19:59.182 "traddr": "10.0.0.3", 00:19:59.182 "adrfam": "ipv4", 00:19:59.182 "trsvcid": "4420", 00:19:59.182 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:59.182 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:59.182 "hdgst": false, 00:19:59.182 "ddgst": false 00:19:59.182 }, 00:19:59.182 "method": "bdev_nvme_attach_controller" 00:19:59.182 }' 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:59.182 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:59.183 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:59.183 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:59.183 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:59.183 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:59.183 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:59.183 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:59.183 13:58:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:59.183 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:59.183 ... 00:19:59.183 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:59.183 ... 00:19:59.183 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:59.183 ... 00:19:59.183 fio-3.35 00:19:59.183 Starting 24 threads 00:20:11.397 00:20:11.397 filename0: (groupid=0, jobs=1): err= 0: pid=83331: Fri Dec 6 13:59:08 2024 00:20:11.397 read: IOPS=229, BW=918KiB/s (940kB/s)(9224KiB/10050msec) 00:20:11.397 slat (usec): min=6, max=8024, avg=24.26, stdev=235.80 00:20:11.397 clat (msec): min=10, max=184, avg=69.55, stdev=24.48 00:20:11.397 lat (msec): min=10, max=184, avg=69.58, stdev=24.48 00:20:11.397 clat percentiles (msec): 00:20:11.397 | 1.00th=[ 14], 5.00th=[ 24], 10.00th=[ 36], 20.00th=[ 50], 00:20:11.397 | 30.00th=[ 59], 40.00th=[ 67], 50.00th=[ 70], 60.00th=[ 73], 00:20:11.397 | 70.00th=[ 81], 80.00th=[ 90], 90.00th=[ 103], 95.00th=[ 108], 00:20:11.397 | 99.00th=[ 127], 99.50th=[ 140], 99.90th=[ 184], 99.95th=[ 184], 00:20:11.397 | 99.99th=[ 184] 00:20:11.397 bw ( KiB/s): min= 640, max= 2104, per=4.08%, avg=915.55, stdev=298.80, samples=20 00:20:11.397 iops : min= 160, max= 526, avg=228.85, stdev=74.68, samples=20 00:20:11.397 lat (msec) : 20=2.43%, 50=17.95%, 100=68.26%, 250=11.36% 00:20:11.397 cpu : usr=33.48%, sys=1.13%, ctx=920, majf=0, minf=9 00:20:11.397 IO depths : 1=0.2%, 2=0.5%, 4=1.4%, 8=81.2%, 16=16.8%, 32=0.0%, >=64=0.0% 00:20:11.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.397 complete : 0=0.0%, 4=88.2%, 8=11.4%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.397 issued rwts: total=2306,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.397 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.397 filename0: (groupid=0, jobs=1): err= 0: pid=83332: Fri Dec 6 13:59:08 2024 00:20:11.397 read: IOPS=225, BW=903KiB/s (925kB/s)(9072KiB/10043msec) 00:20:11.397 slat (usec): min=6, max=8030, avg=30.17, stdev=286.81 00:20:11.397 clat (msec): min=12, max=212, avg=70.65, stdev=25.78 00:20:11.397 lat (msec): min=12, max=212, avg=70.68, stdev=25.79 00:20:11.397 clat percentiles (msec): 00:20:11.397 | 1.00th=[ 14], 5.00th=[ 24], 10.00th=[ 37], 20.00th=[ 50], 00:20:11.397 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 74], 00:20:11.397 | 70.00th=[ 82], 80.00th=[ 92], 90.00th=[ 106], 95.00th=[ 109], 00:20:11.397 | 99.00th=[ 132], 99.50th=[ 165], 99.90th=[ 178], 99.95th=[ 178], 00:20:11.397 | 99.99th=[ 213] 00:20:11.397 bw ( KiB/s): min= 544, max= 2048, per=4.02%, avg=902.40, stdev=290.99, samples=20 00:20:11.397 iops : min= 136, max= 512, avg=225.60, stdev=72.75, samples=20 00:20:11.397 lat (msec) : 20=1.41%, 50=19.40%, 100=67.15%, 250=12.04% 00:20:11.397 cpu : usr=34.79%, sys=1.14%, ctx=990, majf=0, minf=9 00:20:11.397 IO depths : 1=0.1%, 2=0.9%, 4=4.0%, 8=78.6%, 16=16.4%, 32=0.0%, >=64=0.0% 00:20:11.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.397 complete : 0=0.0%, 4=88.9%, 8=10.2%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.397 issued rwts: total=2268,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.397 
latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.397 filename0: (groupid=0, jobs=1): err= 0: pid=83333: Fri Dec 6 13:59:08 2024 00:20:11.397 read: IOPS=234, BW=939KiB/s (961kB/s)(9416KiB/10031msec) 00:20:11.397 slat (usec): min=4, max=4055, avg=26.85, stdev=154.17 00:20:11.397 clat (msec): min=14, max=184, avg=68.03, stdev=24.10 00:20:11.397 lat (msec): min=14, max=184, avg=68.06, stdev=24.10 00:20:11.397 clat percentiles (msec): 00:20:11.397 | 1.00th=[ 16], 5.00th=[ 26], 10.00th=[ 39], 20.00th=[ 48], 00:20:11.397 | 30.00th=[ 57], 40.00th=[ 63], 50.00th=[ 69], 60.00th=[ 72], 00:20:11.397 | 70.00th=[ 78], 80.00th=[ 88], 90.00th=[ 102], 95.00th=[ 107], 00:20:11.397 | 99.00th=[ 122], 99.50th=[ 136], 99.90th=[ 186], 99.95th=[ 186], 00:20:11.397 | 99.99th=[ 186] 00:20:11.397 bw ( KiB/s): min= 664, max= 2032, per=4.17%, avg=934.90, stdev=278.73, samples=20 00:20:11.397 iops : min= 166, max= 508, avg=233.70, stdev=69.69, samples=20 00:20:11.397 lat (msec) : 20=2.17%, 50=21.24%, 100=65.68%, 250=10.92% 00:20:11.397 cpu : usr=41.98%, sys=1.36%, ctx=1383, majf=0, minf=9 00:20:11.397 IO depths : 1=0.1%, 2=0.3%, 4=0.8%, 8=82.3%, 16=16.5%, 32=0.0%, >=64=0.0% 00:20:11.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.397 complete : 0=0.0%, 4=87.7%, 8=12.1%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.397 issued rwts: total=2354,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.397 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.397 filename0: (groupid=0, jobs=1): err= 0: pid=83334: Fri Dec 6 13:59:08 2024 00:20:11.397 read: IOPS=228, BW=914KiB/s (936kB/s)(9168KiB/10033msec) 00:20:11.397 slat (usec): min=3, max=8041, avg=39.80, stdev=374.14 00:20:11.397 clat (msec): min=12, max=184, avg=69.79, stdev=23.99 00:20:11.397 lat (msec): min=12, max=184, avg=69.83, stdev=23.98 00:20:11.397 clat percentiles (msec): 00:20:11.397 | 1.00th=[ 23], 5.00th=[ 31], 10.00th=[ 41], 20.00th=[ 48], 00:20:11.397 | 30.00th=[ 59], 40.00th=[ 63], 50.00th=[ 70], 60.00th=[ 72], 00:20:11.397 | 70.00th=[ 79], 80.00th=[ 90], 90.00th=[ 106], 95.00th=[ 111], 00:20:11.397 | 99.00th=[ 126], 99.50th=[ 146], 99.90th=[ 184], 99.95th=[ 184], 00:20:11.397 | 99.99th=[ 184] 00:20:11.397 bw ( KiB/s): min= 640, max= 1673, per=4.06%, avg=911.35, stdev=214.92, samples=20 00:20:11.397 iops : min= 160, max= 418, avg=227.75, stdev=53.70, samples=20 00:20:11.397 lat (msec) : 20=0.70%, 50=21.86%, 100=64.53%, 250=12.91% 00:20:11.397 cpu : usr=36.00%, sys=1.40%, ctx=1188, majf=0, minf=9 00:20:11.397 IO depths : 1=0.2%, 2=1.4%, 4=5.5%, 8=77.4%, 16=15.5%, 32=0.0%, >=64=0.0% 00:20:11.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.397 complete : 0=0.0%, 4=88.7%, 8=10.1%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.397 issued rwts: total=2292,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.397 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.397 filename0: (groupid=0, jobs=1): err= 0: pid=83335: Fri Dec 6 13:59:08 2024 00:20:11.397 read: IOPS=235, BW=941KiB/s (964kB/s)(9412KiB/10001msec) 00:20:11.397 slat (usec): min=4, max=12041, avg=41.38, stdev=394.38 00:20:11.397 clat (usec): min=309, max=184635, avg=67827.05, stdev=23421.91 00:20:11.397 lat (usec): min=322, max=184653, avg=67868.44, stdev=23428.58 00:20:11.397 clat percentiles (msec): 00:20:11.397 | 1.00th=[ 3], 5.00th=[ 33], 10.00th=[ 43], 20.00th=[ 48], 00:20:11.397 | 30.00th=[ 56], 40.00th=[ 63], 50.00th=[ 68], 60.00th=[ 72], 00:20:11.397 | 70.00th=[ 78], 80.00th=[ 84], 
90.00th=[ 101], 95.00th=[ 108], 00:20:11.397 | 99.00th=[ 132], 99.50th=[ 138], 99.90th=[ 186], 99.95th=[ 186], 00:20:11.397 | 99.99th=[ 186] 00:20:11.397 bw ( KiB/s): min= 688, max= 1367, per=4.09%, avg=918.68, stdev=151.07, samples=19 00:20:11.397 iops : min= 172, max= 341, avg=229.63, stdev=37.64, samples=19 00:20:11.397 lat (usec) : 500=0.04% 00:20:11.397 lat (msec) : 2=0.42%, 4=0.76%, 10=0.42%, 20=0.59%, 50=21.97% 00:20:11.397 lat (msec) : 100=65.79%, 250=9.99% 00:20:11.397 cpu : usr=35.60%, sys=1.40%, ctx=1066, majf=0, minf=9 00:20:11.397 IO depths : 1=0.1%, 2=0.6%, 4=2.6%, 8=81.0%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:11.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.398 complete : 0=0.0%, 4=87.7%, 8=11.6%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.398 issued rwts: total=2353,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.398 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.398 filename0: (groupid=0, jobs=1): err= 0: pid=83336: Fri Dec 6 13:59:08 2024 00:20:11.398 read: IOPS=237, BW=951KiB/s (973kB/s)(9516KiB/10010msec) 00:20:11.398 slat (usec): min=3, max=8038, avg=40.22, stdev=378.23 00:20:11.398 clat (msec): min=22, max=186, avg=67.15, stdev=22.27 00:20:11.398 lat (msec): min=22, max=186, avg=67.19, stdev=22.27 00:20:11.398 clat percentiles (msec): 00:20:11.398 | 1.00th=[ 26], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 48], 00:20:11.398 | 30.00th=[ 55], 40.00th=[ 61], 50.00th=[ 67], 60.00th=[ 71], 00:20:11.398 | 70.00th=[ 75], 80.00th=[ 84], 90.00th=[ 100], 95.00th=[ 107], 00:20:11.398 | 99.00th=[ 116], 99.50th=[ 138], 99.90th=[ 186], 99.95th=[ 186], 00:20:11.398 | 99.99th=[ 186] 00:20:11.398 bw ( KiB/s): min= 720, max= 1680, per=4.23%, avg=948.11, stdev=208.20, samples=19 00:20:11.398 iops : min= 180, max= 420, avg=237.00, stdev=52.05, samples=19 00:20:11.398 lat (msec) : 50=25.26%, 100=65.20%, 250=9.54% 00:20:11.398 cpu : usr=33.55%, sys=1.06%, ctx=951, majf=0, minf=9 00:20:11.398 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=82.1%, 16=15.9%, 32=0.0%, >=64=0.0% 00:20:11.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.398 complete : 0=0.0%, 4=87.4%, 8=12.3%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.398 issued rwts: total=2379,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.398 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.398 filename0: (groupid=0, jobs=1): err= 0: pid=83337: Fri Dec 6 13:59:08 2024 00:20:11.398 read: IOPS=237, BW=949KiB/s (972kB/s)(9500KiB/10006msec) 00:20:11.398 slat (usec): min=5, max=8051, avg=44.42, stdev=410.11 00:20:11.398 clat (msec): min=10, max=189, avg=67.23, stdev=22.23 00:20:11.398 lat (msec): min=10, max=189, avg=67.27, stdev=22.22 00:20:11.398 clat percentiles (msec): 00:20:11.398 | 1.00th=[ 26], 5.00th=[ 32], 10.00th=[ 40], 20.00th=[ 48], 00:20:11.398 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 69], 60.00th=[ 72], 00:20:11.398 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 99], 95.00th=[ 107], 00:20:11.398 | 99.00th=[ 118], 99.50th=[ 142], 99.90th=[ 190], 99.95th=[ 190], 00:20:11.398 | 99.99th=[ 190] 00:20:11.398 bw ( KiB/s): min= 720, max= 1536, per=4.19%, avg=940.74, stdev=173.91, samples=19 00:20:11.398 iops : min= 180, max= 384, avg=235.16, stdev=43.46, samples=19 00:20:11.398 lat (msec) : 20=0.72%, 50=24.51%, 100=66.11%, 250=8.67% 00:20:11.398 cpu : usr=33.22%, sys=0.97%, ctx=890, majf=0, minf=9 00:20:11.398 IO depths : 1=0.1%, 2=0.7%, 4=2.9%, 8=80.7%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:11.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:20:11.398 complete : 0=0.0%, 4=87.7%, 8=11.7%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.398 issued rwts: total=2375,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.398 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.398 filename0: (groupid=0, jobs=1): err= 0: pid=83338: Fri Dec 6 13:59:08 2024 00:20:11.398 read: IOPS=238, BW=953KiB/s (976kB/s)(9556KiB/10026msec) 00:20:11.398 slat (usec): min=3, max=8034, avg=42.17, stdev=363.03 00:20:11.398 clat (msec): min=21, max=190, avg=66.94, stdev=22.62 00:20:11.398 lat (msec): min=21, max=190, avg=66.98, stdev=22.62 00:20:11.398 clat percentiles (msec): 00:20:11.398 | 1.00th=[ 24], 5.00th=[ 34], 10.00th=[ 40], 20.00th=[ 47], 00:20:11.398 | 30.00th=[ 53], 40.00th=[ 61], 50.00th=[ 68], 60.00th=[ 71], 00:20:11.398 | 70.00th=[ 75], 80.00th=[ 84], 90.00th=[ 99], 95.00th=[ 107], 00:20:11.398 | 99.00th=[ 120], 99.50th=[ 138], 99.90th=[ 190], 99.95th=[ 190], 00:20:11.398 | 99.99th=[ 190] 00:20:11.398 bw ( KiB/s): min= 688, max= 1624, per=4.24%, avg=951.00, stdev=198.56, samples=20 00:20:11.398 iops : min= 172, max= 406, avg=237.70, stdev=49.63, samples=20 00:20:11.398 lat (msec) : 50=26.29%, 100=64.34%, 250=9.38% 00:20:11.398 cpu : usr=40.60%, sys=1.56%, ctx=1439, majf=0, minf=9 00:20:11.398 IO depths : 1=0.1%, 2=0.6%, 4=2.0%, 8=81.6%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:11.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.398 complete : 0=0.0%, 4=87.5%, 8=12.1%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.398 issued rwts: total=2389,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.398 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.398 filename1: (groupid=0, jobs=1): err= 0: pid=83339: Fri Dec 6 13:59:08 2024 00:20:11.398 read: IOPS=247, BW=989KiB/s (1013kB/s)(9940KiB/10051msec) 00:20:11.398 slat (usec): min=4, max=8058, avg=22.73, stdev=184.64 00:20:11.398 clat (usec): min=1364, max=187561, avg=64471.50, stdev=28688.17 00:20:11.398 lat (usec): min=1374, max=187579, avg=64494.23, stdev=28688.24 00:20:11.398 clat percentiles (usec): 00:20:11.398 | 1.00th=[ 1483], 5.00th=[ 6456], 10.00th=[ 19792], 20.00th=[ 46924], 00:20:11.398 | 30.00th=[ 55313], 40.00th=[ 61080], 50.00th=[ 68682], 60.00th=[ 71828], 00:20:11.398 | 70.00th=[ 73925], 80.00th=[ 84411], 90.00th=[102237], 95.00th=[107480], 00:20:11.398 | 99.00th=[122160], 99.50th=[162530], 99.90th=[187696], 99.95th=[187696], 00:20:11.398 | 99.99th=[187696] 00:20:11.398 bw ( KiB/s): min= 688, max= 3368, per=4.41%, avg=990.00, stdev=569.63, samples=20 00:20:11.398 iops : min= 172, max= 842, avg=247.50, stdev=142.41, samples=20 00:20:11.398 lat (msec) : 2=3.86%, 4=0.64%, 10=0.80%, 20=4.71%, 50=16.98% 00:20:11.398 lat (msec) : 100=62.41%, 250=10.58% 00:20:11.398 cpu : usr=33.30%, sys=1.13%, ctx=933, majf=0, minf=0 00:20:11.398 IO depths : 1=0.4%, 2=1.0%, 4=2.5%, 8=80.0%, 16=16.2%, 32=0.0%, >=64=0.0% 00:20:11.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.398 complete : 0=0.0%, 4=88.3%, 8=11.2%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.398 issued rwts: total=2485,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.398 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.398 filename1: (groupid=0, jobs=1): err= 0: pid=83340: Fri Dec 6 13:59:08 2024 00:20:11.398 read: IOPS=245, BW=982KiB/s (1005kB/s)(9828KiB/10009msec) 00:20:11.398 slat (usec): min=5, max=8035, avg=54.78, stdev=438.19 00:20:11.398 clat (msec): min=13, max=181, avg=64.93, stdev=22.67 00:20:11.398 lat (msec): 
min=13, max=181, avg=64.99, stdev=22.67 00:20:11.398 clat percentiles (msec): 00:20:11.398 | 1.00th=[ 17], 5.00th=[ 33], 10.00th=[ 40], 20.00th=[ 46], 00:20:11.398 | 30.00th=[ 51], 40.00th=[ 59], 50.00th=[ 65], 60.00th=[ 70], 00:20:11.398 | 70.00th=[ 73], 80.00th=[ 82], 90.00th=[ 97], 95.00th=[ 107], 00:20:11.398 | 99.00th=[ 118], 99.50th=[ 136], 99.90th=[ 182], 99.95th=[ 182], 00:20:11.398 | 99.99th=[ 182] 00:20:11.398 bw ( KiB/s): min= 712, max= 1832, per=4.36%, avg=978.95, stdev=237.41, samples=19 00:20:11.398 iops : min= 178, max= 458, avg=244.74, stdev=59.35, samples=19 00:20:11.398 lat (msec) : 20=1.38%, 50=28.45%, 100=62.15%, 250=8.02% 00:20:11.398 cpu : usr=41.65%, sys=1.53%, ctx=1367, majf=0, minf=9 00:20:11.398 IO depths : 1=0.1%, 2=0.2%, 4=1.1%, 8=83.0%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:11.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.398 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.398 issued rwts: total=2457,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.398 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.398 filename1: (groupid=0, jobs=1): err= 0: pid=83341: Fri Dec 6 13:59:08 2024 00:20:11.398 read: IOPS=231, BW=927KiB/s (949kB/s)(9288KiB/10020msec) 00:20:11.398 slat (usec): min=5, max=7975, avg=35.23, stdev=262.73 00:20:11.398 clat (msec): min=14, max=184, avg=68.89, stdev=22.31 00:20:11.398 lat (msec): min=14, max=184, avg=68.92, stdev=22.31 00:20:11.398 clat percentiles (msec): 00:20:11.398 | 1.00th=[ 26], 5.00th=[ 35], 10.00th=[ 44], 20.00th=[ 49], 00:20:11.398 | 30.00th=[ 57], 40.00th=[ 64], 50.00th=[ 68], 60.00th=[ 72], 00:20:11.398 | 70.00th=[ 77], 80.00th=[ 87], 90.00th=[ 101], 95.00th=[ 108], 00:20:11.398 | 99.00th=[ 118], 99.50th=[ 138], 99.90th=[ 186], 99.95th=[ 186], 00:20:11.398 | 99.99th=[ 186] 00:20:11.398 bw ( KiB/s): min= 664, max= 1576, per=4.11%, avg=922.10, stdev=187.38, samples=20 00:20:11.398 iops : min= 166, max= 394, avg=230.50, stdev=46.84, samples=20 00:20:11.398 lat (msec) : 20=0.09%, 50=23.43%, 100=66.15%, 250=10.34% 00:20:11.398 cpu : usr=40.24%, sys=1.41%, ctx=1154, majf=0, minf=9 00:20:11.398 IO depths : 1=0.1%, 2=0.6%, 4=1.9%, 8=81.3%, 16=16.1%, 32=0.0%, >=64=0.0% 00:20:11.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.398 complete : 0=0.0%, 4=87.8%, 8=11.8%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.398 issued rwts: total=2322,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.398 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.398 filename1: (groupid=0, jobs=1): err= 0: pid=83342: Fri Dec 6 13:59:08 2024 00:20:11.398 read: IOPS=234, BW=937KiB/s (959kB/s)(9404KiB/10037msec) 00:20:11.398 slat (usec): min=4, max=12056, avg=40.12, stdev=382.47 00:20:11.398 clat (msec): min=15, max=185, avg=68.05, stdev=22.61 00:20:11.398 lat (msec): min=15, max=185, avg=68.09, stdev=22.61 00:20:11.398 clat percentiles (msec): 00:20:11.398 | 1.00th=[ 22], 5.00th=[ 32], 10.00th=[ 42], 20.00th=[ 48], 00:20:11.398 | 30.00th=[ 55], 40.00th=[ 63], 50.00th=[ 68], 60.00th=[ 72], 00:20:11.398 | 70.00th=[ 77], 80.00th=[ 87], 90.00th=[ 101], 95.00th=[ 108], 00:20:11.398 | 99.00th=[ 125], 99.50th=[ 138], 99.90th=[ 186], 99.95th=[ 186], 00:20:11.398 | 99.99th=[ 186] 00:20:11.398 bw ( KiB/s): min= 664, max= 1648, per=4.17%, avg=936.30, stdev=196.92, samples=20 00:20:11.398 iops : min= 166, max= 412, avg=234.05, stdev=49.23, samples=20 00:20:11.398 lat (msec) : 20=0.68%, 50=24.29%, 100=65.33%, 250=9.70% 00:20:11.398 cpu : 
usr=42.51%, sys=1.52%, ctx=1404, majf=0, minf=9 00:20:11.398 IO depths : 1=0.1%, 2=0.6%, 4=2.2%, 8=81.2%, 16=16.0%, 32=0.0%, >=64=0.0% 00:20:11.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.398 complete : 0=0.0%, 4=87.8%, 8=11.7%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.398 issued rwts: total=2351,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.398 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.398 filename1: (groupid=0, jobs=1): err= 0: pid=83343: Fri Dec 6 13:59:08 2024 00:20:11.398 read: IOPS=229, BW=916KiB/s (938kB/s)(9172KiB/10012msec) 00:20:11.398 slat (usec): min=5, max=8060, avg=25.77, stdev=187.96 00:20:11.398 clat (msec): min=16, max=184, avg=69.74, stdev=22.40 00:20:11.398 lat (msec): min=16, max=184, avg=69.77, stdev=22.40 00:20:11.398 clat percentiles (msec): 00:20:11.398 | 1.00th=[ 22], 5.00th=[ 33], 10.00th=[ 44], 20.00th=[ 49], 00:20:11.399 | 30.00th=[ 59], 40.00th=[ 65], 50.00th=[ 70], 60.00th=[ 72], 00:20:11.399 | 70.00th=[ 79], 80.00th=[ 87], 90.00th=[ 103], 95.00th=[ 108], 00:20:11.399 | 99.00th=[ 123], 99.50th=[ 138], 99.90th=[ 186], 99.95th=[ 186], 00:20:11.399 | 99.99th=[ 186] 00:20:11.399 bw ( KiB/s): min= 672, max= 1608, per=4.06%, avg=910.63, stdev=195.42, samples=19 00:20:11.399 iops : min= 168, max= 402, avg=227.63, stdev=48.86, samples=19 00:20:11.399 lat (msec) : 20=0.57%, 50=20.72%, 100=66.77%, 250=11.95% 00:20:11.399 cpu : usr=36.55%, sys=1.32%, ctx=1153, majf=0, minf=9 00:20:11.399 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=81.9%, 16=16.6%, 32=0.0%, >=64=0.0% 00:20:11.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.399 complete : 0=0.0%, 4=87.9%, 8=11.9%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.399 issued rwts: total=2293,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.399 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.399 filename1: (groupid=0, jobs=1): err= 0: pid=83344: Fri Dec 6 13:59:08 2024 00:20:11.399 read: IOPS=227, BW=910KiB/s (932kB/s)(9148KiB/10051msec) 00:20:11.399 slat (usec): min=6, max=8026, avg=28.29, stdev=251.36 00:20:11.399 clat (msec): min=7, max=179, avg=70.14, stdev=25.59 00:20:11.399 lat (msec): min=7, max=179, avg=70.17, stdev=25.59 00:20:11.399 clat percentiles (msec): 00:20:11.399 | 1.00th=[ 12], 5.00th=[ 24], 10.00th=[ 33], 20.00th=[ 50], 00:20:11.399 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 71], 60.00th=[ 72], 00:20:11.399 | 70.00th=[ 82], 80.00th=[ 92], 90.00th=[ 106], 95.00th=[ 109], 00:20:11.399 | 99.00th=[ 132], 99.50th=[ 133], 99.90th=[ 180], 99.95th=[ 180], 00:20:11.399 | 99.99th=[ 180] 00:20:11.399 bw ( KiB/s): min= 656, max= 2176, per=4.05%, avg=908.00, stdev=311.85, samples=20 00:20:11.399 iops : min= 164, max= 544, avg=227.00, stdev=77.96, samples=20 00:20:11.399 lat (msec) : 10=0.70%, 20=2.80%, 50=16.75%, 100=67.16%, 250=12.59% 00:20:11.399 cpu : usr=33.27%, sys=1.10%, ctx=901, majf=0, minf=9 00:20:11.399 IO depths : 1=0.1%, 2=1.2%, 4=4.5%, 8=77.9%, 16=16.4%, 32=0.0%, >=64=0.0% 00:20:11.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.399 complete : 0=0.0%, 4=89.1%, 8=10.0%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.399 issued rwts: total=2287,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.399 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.399 filename1: (groupid=0, jobs=1): err= 0: pid=83345: Fri Dec 6 13:59:08 2024 00:20:11.399 read: IOPS=241, BW=964KiB/s (987kB/s)(9676KiB/10035msec) 00:20:11.399 slat (usec): min=5, max=4020, avg=28.17, 
stdev=152.10 00:20:11.399 clat (msec): min=20, max=184, avg=66.21, stdev=22.52 00:20:11.399 lat (msec): min=20, max=184, avg=66.24, stdev=22.52 00:20:11.399 clat percentiles (msec): 00:20:11.399 | 1.00th=[ 23], 5.00th=[ 33], 10.00th=[ 39], 20.00th=[ 48], 00:20:11.399 | 30.00th=[ 53], 40.00th=[ 61], 50.00th=[ 67], 60.00th=[ 70], 00:20:11.399 | 70.00th=[ 74], 80.00th=[ 82], 90.00th=[ 99], 95.00th=[ 106], 00:20:11.399 | 99.00th=[ 122], 99.50th=[ 142], 99.90th=[ 186], 99.95th=[ 186], 00:20:11.399 | 99.99th=[ 186] 00:20:11.399 bw ( KiB/s): min= 720, max= 1738, per=4.28%, avg=960.20, stdev=206.70, samples=20 00:20:11.399 iops : min= 180, max= 434, avg=240.00, stdev=51.58, samples=20 00:20:11.399 lat (msec) : 50=27.37%, 100=63.87%, 250=8.76% 00:20:11.399 cpu : usr=38.64%, sys=1.29%, ctx=1256, majf=0, minf=9 00:20:11.399 IO depths : 1=0.1%, 2=0.5%, 4=1.9%, 8=81.7%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:11.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.399 complete : 0=0.0%, 4=87.5%, 8=12.1%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.399 issued rwts: total=2419,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.399 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.399 filename1: (groupid=0, jobs=1): err= 0: pid=83346: Fri Dec 6 13:59:08 2024 00:20:11.399 read: IOPS=240, BW=960KiB/s (983kB/s)(9608KiB/10004msec) 00:20:11.399 slat (usec): min=4, max=8075, avg=26.83, stdev=183.98 00:20:11.399 clat (msec): min=3, max=178, avg=66.50, stdev=22.17 00:20:11.399 lat (msec): min=3, max=178, avg=66.52, stdev=22.17 00:20:11.399 clat percentiles (msec): 00:20:11.399 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 43], 20.00th=[ 48], 00:20:11.399 | 30.00th=[ 53], 40.00th=[ 61], 50.00th=[ 67], 60.00th=[ 70], 00:20:11.399 | 70.00th=[ 74], 80.00th=[ 83], 90.00th=[ 99], 95.00th=[ 106], 00:20:11.399 | 99.00th=[ 118], 99.50th=[ 142], 99.90th=[ 178], 99.95th=[ 178], 00:20:11.399 | 99.99th=[ 178] 00:20:11.399 bw ( KiB/s): min= 712, max= 1424, per=4.23%, avg=949.11, stdev=157.18, samples=19 00:20:11.399 iops : min= 178, max= 356, avg=237.21, stdev=39.26, samples=19 00:20:11.399 lat (msec) : 4=0.12%, 10=0.25%, 20=0.29%, 50=27.27%, 100=62.86% 00:20:11.399 lat (msec) : 250=9.20% 00:20:11.399 cpu : usr=33.40%, sys=1.36%, ctx=962, majf=0, minf=9 00:20:11.399 IO depths : 1=0.1%, 2=0.6%, 4=2.2%, 8=81.5%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:11.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.399 complete : 0=0.0%, 4=87.4%, 8=12.1%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.399 issued rwts: total=2402,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.399 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.399 filename2: (groupid=0, jobs=1): err= 0: pid=83347: Fri Dec 6 13:59:08 2024 00:20:11.399 read: IOPS=226, BW=905KiB/s (926kB/s)(9060KiB/10015msec) 00:20:11.399 slat (usec): min=5, max=4034, avg=26.14, stdev=143.64 00:20:11.399 clat (msec): min=17, max=201, avg=70.58, stdev=26.72 00:20:11.399 lat (msec): min=17, max=201, avg=70.61, stdev=26.71 00:20:11.399 clat percentiles (msec): 00:20:11.399 | 1.00th=[ 24], 5.00th=[ 31], 10.00th=[ 42], 20.00th=[ 48], 00:20:11.399 | 30.00th=[ 54], 40.00th=[ 63], 50.00th=[ 68], 60.00th=[ 73], 00:20:11.399 | 70.00th=[ 79], 80.00th=[ 92], 90.00th=[ 108], 95.00th=[ 121], 00:20:11.399 | 99.00th=[ 140], 99.50th=[ 155], 99.90th=[ 182], 99.95th=[ 182], 00:20:11.399 | 99.99th=[ 203] 00:20:11.399 bw ( KiB/s): min= 512, max= 1776, per=4.02%, avg=902.30, stdev=263.52, samples=20 00:20:11.399 iops : min= 128, max= 
444, avg=225.55, stdev=65.87, samples=20 00:20:11.399 lat (msec) : 20=0.18%, 50=22.74%, 100=62.16%, 250=14.92% 00:20:11.399 cpu : usr=46.56%, sys=1.56%, ctx=1536, majf=0, minf=9 00:20:11.399 IO depths : 1=0.1%, 2=1.7%, 4=6.8%, 8=76.2%, 16=15.2%, 32=0.0%, >=64=0.0% 00:20:11.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.399 complete : 0=0.0%, 4=89.0%, 8=9.5%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.399 issued rwts: total=2265,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.399 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.399 filename2: (groupid=0, jobs=1): err= 0: pid=83348: Fri Dec 6 13:59:08 2024 00:20:11.399 read: IOPS=229, BW=919KiB/s (941kB/s)(9224KiB/10041msec) 00:20:11.399 slat (usec): min=6, max=8044, avg=40.15, stdev=371.42 00:20:11.399 clat (msec): min=11, max=189, avg=69.41, stdev=22.96 00:20:11.399 lat (msec): min=11, max=190, avg=69.45, stdev=22.96 00:20:11.399 clat percentiles (msec): 00:20:11.399 | 1.00th=[ 23], 5.00th=[ 31], 10.00th=[ 41], 20.00th=[ 50], 00:20:11.399 | 30.00th=[ 59], 40.00th=[ 65], 50.00th=[ 69], 60.00th=[ 73], 00:20:11.399 | 70.00th=[ 80], 80.00th=[ 87], 90.00th=[ 103], 95.00th=[ 108], 00:20:11.399 | 99.00th=[ 120], 99.50th=[ 142], 99.90th=[ 190], 99.95th=[ 190], 00:20:11.399 | 99.99th=[ 190] 00:20:11.399 bw ( KiB/s): min= 664, max= 1824, per=4.09%, avg=918.40, stdev=235.80, samples=20 00:20:11.399 iops : min= 166, max= 456, avg=229.60, stdev=58.95, samples=20 00:20:11.399 lat (msec) : 20=0.09%, 50=21.29%, 100=67.52%, 250=11.10% 00:20:11.399 cpu : usr=35.10%, sys=1.20%, ctx=1074, majf=0, minf=9 00:20:11.399 IO depths : 1=0.2%, 2=0.6%, 4=1.7%, 8=81.1%, 16=16.5%, 32=0.0%, >=64=0.0% 00:20:11.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.399 complete : 0=0.0%, 4=88.1%, 8=11.5%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.399 issued rwts: total=2306,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.399 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.399 filename2: (groupid=0, jobs=1): err= 0: pid=83349: Fri Dec 6 13:59:08 2024 00:20:11.399 read: IOPS=224, BW=899KiB/s (921kB/s)(9036KiB/10048msec) 00:20:11.399 slat (usec): min=5, max=4030, avg=25.10, stdev=146.01 00:20:11.399 clat (msec): min=13, max=184, avg=70.95, stdev=27.09 00:20:11.399 lat (msec): min=14, max=184, avg=70.97, stdev=27.10 00:20:11.399 clat percentiles (msec): 00:20:11.399 | 1.00th=[ 19], 5.00th=[ 25], 10.00th=[ 39], 20.00th=[ 48], 00:20:11.399 | 30.00th=[ 57], 40.00th=[ 65], 50.00th=[ 70], 60.00th=[ 73], 00:20:11.399 | 70.00th=[ 82], 80.00th=[ 95], 90.00th=[ 107], 95.00th=[ 115], 00:20:11.399 | 99.00th=[ 146], 99.50th=[ 146], 99.90th=[ 186], 99.95th=[ 186], 00:20:11.399 | 99.99th=[ 186] 00:20:11.399 bw ( KiB/s): min= 512, max= 1920, per=4.00%, avg=897.20, stdev=279.29, samples=20 00:20:11.399 iops : min= 128, max= 480, avg=224.30, stdev=69.82, samples=20 00:20:11.399 lat (msec) : 20=1.77%, 50=21.60%, 100=60.96%, 250=15.67% 00:20:11.399 cpu : usr=40.71%, sys=1.39%, ctx=1165, majf=0, minf=0 00:20:11.399 IO depths : 1=0.1%, 2=1.9%, 4=7.2%, 8=75.5%, 16=15.4%, 32=0.0%, >=64=0.0% 00:20:11.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.399 complete : 0=0.0%, 4=89.4%, 8=9.1%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.399 issued rwts: total=2259,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.399 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.399 filename2: (groupid=0, jobs=1): err= 0: pid=83350: Fri Dec 6 13:59:08 2024 
00:20:11.399 read: IOPS=236, BW=945KiB/s (968kB/s)(9504KiB/10053msec) 00:20:11.399 slat (usec): min=6, max=8024, avg=24.33, stdev=184.45 00:20:11.399 clat (msec): min=6, max=190, avg=67.54, stdev=25.65 00:20:11.399 lat (msec): min=6, max=190, avg=67.57, stdev=25.65 00:20:11.399 clat percentiles (msec): 00:20:11.399 | 1.00th=[ 8], 5.00th=[ 24], 10.00th=[ 33], 20.00th=[ 48], 00:20:11.399 | 30.00th=[ 57], 40.00th=[ 63], 50.00th=[ 70], 60.00th=[ 72], 00:20:11.399 | 70.00th=[ 79], 80.00th=[ 87], 90.00th=[ 105], 95.00th=[ 108], 00:20:11.399 | 99.00th=[ 121], 99.50th=[ 146], 99.90th=[ 190], 99.95th=[ 190], 00:20:11.399 | 99.99th=[ 190] 00:20:11.399 bw ( KiB/s): min= 688, max= 2400, per=4.21%, avg=943.55, stdev=357.88, samples=20 00:20:11.399 iops : min= 172, max= 600, avg=235.85, stdev=89.46, samples=20 00:20:11.399 lat (msec) : 10=1.26%, 20=2.36%, 50=20.92%, 100=63.80%, 250=11.66% 00:20:11.399 cpu : usr=35.06%, sys=1.22%, ctx=997, majf=0, minf=9 00:20:11.399 IO depths : 1=0.1%, 2=0.3%, 4=1.4%, 8=81.5%, 16=16.7%, 32=0.0%, >=64=0.0% 00:20:11.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.399 complete : 0=0.0%, 4=88.1%, 8=11.5%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.399 issued rwts: total=2376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.400 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.400 filename2: (groupid=0, jobs=1): err= 0: pid=83351: Fri Dec 6 13:59:08 2024 00:20:11.400 read: IOPS=231, BW=926KiB/s (948kB/s)(9300KiB/10043msec) 00:20:11.400 slat (usec): min=5, max=8047, avg=37.37, stdev=335.00 00:20:11.400 clat (msec): min=10, max=185, avg=68.84, stdev=24.21 00:20:11.400 lat (msec): min=10, max=185, avg=68.88, stdev=24.21 00:20:11.400 clat percentiles (msec): 00:20:11.400 | 1.00th=[ 15], 5.00th=[ 26], 10.00th=[ 36], 20.00th=[ 49], 00:20:11.400 | 30.00th=[ 59], 40.00th=[ 65], 50.00th=[ 70], 60.00th=[ 72], 00:20:11.400 | 70.00th=[ 79], 80.00th=[ 89], 90.00th=[ 103], 95.00th=[ 108], 00:20:11.400 | 99.00th=[ 126], 99.50th=[ 140], 99.90th=[ 186], 99.95th=[ 186], 00:20:11.400 | 99.99th=[ 186] 00:20:11.400 bw ( KiB/s): min= 688, max= 2079, per=4.12%, avg=925.85, stdev=287.84, samples=20 00:20:11.400 iops : min= 172, max= 519, avg=231.40, stdev=71.80, samples=20 00:20:11.400 lat (msec) : 20=1.76%, 50=20.65%, 100=66.24%, 250=11.35% 00:20:11.400 cpu : usr=42.01%, sys=1.30%, ctx=1261, majf=0, minf=9 00:20:11.400 IO depths : 1=0.2%, 2=0.6%, 4=1.6%, 8=81.1%, 16=16.6%, 32=0.0%, >=64=0.0% 00:20:11.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.400 complete : 0=0.0%, 4=88.1%, 8=11.5%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.400 issued rwts: total=2325,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.400 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.400 filename2: (groupid=0, jobs=1): err= 0: pid=83352: Fri Dec 6 13:59:08 2024 00:20:11.400 read: IOPS=237, BW=950KiB/s (973kB/s)(9520KiB/10020msec) 00:20:11.400 slat (usec): min=3, max=8040, avg=48.50, stdev=436.84 00:20:11.400 clat (msec): min=22, max=179, avg=67.16, stdev=22.07 00:20:11.400 lat (msec): min=22, max=179, avg=67.21, stdev=22.07 00:20:11.400 clat percentiles (msec): 00:20:11.400 | 1.00th=[ 26], 5.00th=[ 36], 10.00th=[ 42], 20.00th=[ 48], 00:20:11.400 | 30.00th=[ 54], 40.00th=[ 61], 50.00th=[ 67], 60.00th=[ 71], 00:20:11.400 | 70.00th=[ 77], 80.00th=[ 84], 90.00th=[ 99], 95.00th=[ 107], 00:20:11.400 | 99.00th=[ 122], 99.50th=[ 133], 99.90th=[ 180], 99.95th=[ 180], 00:20:11.400 | 99.99th=[ 180] 00:20:11.400 bw ( KiB/s): 
min= 688, max= 1592, per=4.22%, avg=946.70, stdev=183.69, samples=20 00:20:11.400 iops : min= 172, max= 398, avg=236.65, stdev=45.92, samples=20 00:20:11.400 lat (msec) : 50=26.34%, 100=64.62%, 250=9.03% 00:20:11.400 cpu : usr=34.63%, sys=1.12%, ctx=968, majf=0, minf=9 00:20:11.400 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=82.1%, 16=15.9%, 32=0.0%, >=64=0.0% 00:20:11.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.400 complete : 0=0.0%, 4=87.4%, 8=12.3%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.400 issued rwts: total=2380,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.400 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.400 filename2: (groupid=0, jobs=1): err= 0: pid=83353: Fri Dec 6 13:59:08 2024 00:20:11.400 read: IOPS=238, BW=952KiB/s (975kB/s)(9540KiB/10016msec) 00:20:11.400 slat (usec): min=4, max=15051, avg=34.69, stdev=363.75 00:20:11.400 clat (msec): min=21, max=183, avg=67.02, stdev=22.65 00:20:11.400 lat (msec): min=21, max=183, avg=67.05, stdev=22.65 00:20:11.400 clat percentiles (msec): 00:20:11.400 | 1.00th=[ 24], 5.00th=[ 34], 10.00th=[ 41], 20.00th=[ 48], 00:20:11.400 | 30.00th=[ 54], 40.00th=[ 62], 50.00th=[ 67], 60.00th=[ 72], 00:20:11.400 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 99], 95.00th=[ 108], 00:20:11.400 | 99.00th=[ 121], 99.50th=[ 142], 99.90th=[ 184], 99.95th=[ 184], 00:20:11.400 | 99.99th=[ 184] 00:20:11.400 bw ( KiB/s): min= 688, max= 1808, per=4.23%, avg=949.10, stdev=229.75, samples=20 00:20:11.400 iops : min= 172, max= 452, avg=237.25, stdev=57.44, samples=20 00:20:11.400 lat (msec) : 50=27.00%, 100=63.77%, 250=9.22% 00:20:11.400 cpu : usr=42.48%, sys=1.57%, ctx=1153, majf=0, minf=9 00:20:11.400 IO depths : 1=0.1%, 2=0.5%, 4=1.5%, 8=82.0%, 16=15.9%, 32=0.0%, >=64=0.0% 00:20:11.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.400 complete : 0=0.0%, 4=87.5%, 8=12.2%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.400 issued rwts: total=2385,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.400 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.400 filename2: (groupid=0, jobs=1): err= 0: pid=83354: Fri Dec 6 13:59:08 2024 00:20:11.400 read: IOPS=232, BW=929KiB/s (952kB/s)(9320KiB/10029msec) 00:20:11.400 slat (usec): min=4, max=8041, avg=28.14, stdev=204.42 00:20:11.400 clat (msec): min=15, max=184, avg=68.71, stdev=22.39 00:20:11.400 lat (msec): min=15, max=184, avg=68.74, stdev=22.39 00:20:11.400 clat percentiles (msec): 00:20:11.400 | 1.00th=[ 31], 5.00th=[ 32], 10.00th=[ 42], 20.00th=[ 49], 00:20:11.400 | 30.00th=[ 57], 40.00th=[ 64], 50.00th=[ 68], 60.00th=[ 72], 00:20:11.400 | 70.00th=[ 75], 80.00th=[ 85], 90.00th=[ 103], 95.00th=[ 108], 00:20:11.400 | 99.00th=[ 127], 99.50th=[ 138], 99.90th=[ 186], 99.95th=[ 186], 00:20:11.400 | 99.99th=[ 186] 00:20:11.400 bw ( KiB/s): min= 688, max= 1536, per=4.12%, avg=925.50, stdev=177.59, samples=20 00:20:11.400 iops : min= 172, max= 384, avg=231.35, stdev=44.40, samples=20 00:20:11.400 lat (msec) : 20=0.17%, 50=22.58%, 100=65.62%, 250=11.63% 00:20:11.400 cpu : usr=39.46%, sys=1.41%, ctx=1469, majf=0, minf=9 00:20:11.400 IO depths : 1=0.2%, 2=0.9%, 4=3.2%, 8=80.1%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:11.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.400 complete : 0=0.0%, 4=87.9%, 8=11.4%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.400 issued rwts: total=2330,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.400 latency : target=0, window=0, percentile=100.00%, depth=16 
00:20:11.400 00:20:11.400 Run status group 0 (all jobs): 00:20:11.400 READ: bw=21.9MiB/s (23.0MB/s), 899KiB/s-989KiB/s (921kB/s-1013kB/s), io=220MiB (231MB), run=10001-10053msec 00:20:11.400 13:59:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:20:11.400 13:59:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:11.400 13:59:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:11.400 13:59:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:11.400 13:59:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:11.400 13:59:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:11.400 13:59:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.400 13:59:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:11.400 13:59:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.400 13:59:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:11.400 13:59:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.400 13:59:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:11.400 13:59:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.400 13:59:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:11.400 13:59:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:11.400 13:59:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:11.400 13:59:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:11.400 13:59:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.400 13:59:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:11.400 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.400 13:59:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:11.400 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.400 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:11.400 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.400 13:59:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:11.400 13:59:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:20:11.400 13:59:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:20:11.400 13:59:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:11.400 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.400 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:11.400 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.400 13:59:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:20:11.400 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:11.400 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:11.400 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.400 13:59:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:20:11.400 13:59:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:20:11.400 13:59:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:20:11.400 13:59:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:20:11.400 13:59:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:20:11.400 13:59:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:20:11.400 13:59:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:20:11.400 13:59:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:11.400 13:59:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:11.400 13:59:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:11.400 13:59:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:11.400 13:59:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:11.400 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.400 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:11.400 bdev_null0 00:20:11.400 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.400 13:59:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:11.400 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.400 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:11.400 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.400 13:59:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:11.401 [2024-12-06 13:59:09.058753] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:11.401 bdev_null1 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:11.401 { 00:20:11.401 "params": { 00:20:11.401 "name": "Nvme$subsystem", 00:20:11.401 "trtype": "$TEST_TRANSPORT", 00:20:11.401 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.401 "adrfam": "ipv4", 00:20:11.401 "trsvcid": "$NVMF_PORT", 00:20:11.401 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.401 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.401 "hdgst": ${hdgst:-false}, 00:20:11.401 "ddgst": ${ddgst:-false} 00:20:11.401 }, 00:20:11.401 "method": "bdev_nvme_attach_controller" 00:20:11.401 } 00:20:11.401 EOF 00:20:11.401 )") 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 
00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:11.401 { 00:20:11.401 "params": { 00:20:11.401 "name": "Nvme$subsystem", 00:20:11.401 "trtype": "$TEST_TRANSPORT", 00:20:11.401 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.401 "adrfam": "ipv4", 00:20:11.401 "trsvcid": "$NVMF_PORT", 00:20:11.401 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.401 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.401 "hdgst": ${hdgst:-false}, 00:20:11.401 "ddgst": ${ddgst:-false} 00:20:11.401 }, 00:20:11.401 "method": "bdev_nvme_attach_controller" 00:20:11.401 } 00:20:11.401 EOF 00:20:11.401 )") 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:11.401 "params": { 00:20:11.401 "name": "Nvme0", 00:20:11.401 "trtype": "tcp", 00:20:11.401 "traddr": "10.0.0.3", 00:20:11.401 "adrfam": "ipv4", 00:20:11.401 "trsvcid": "4420", 00:20:11.401 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:11.401 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:11.401 "hdgst": false, 00:20:11.401 "ddgst": false 00:20:11.401 }, 00:20:11.401 "method": "bdev_nvme_attach_controller" 00:20:11.401 },{ 00:20:11.401 "params": { 00:20:11.401 "name": "Nvme1", 00:20:11.401 "trtype": "tcp", 00:20:11.401 "traddr": "10.0.0.3", 00:20:11.401 "adrfam": "ipv4", 00:20:11.401 "trsvcid": "4420", 00:20:11.401 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.401 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:11.401 "hdgst": false, 00:20:11.401 "ddgst": false 00:20:11.401 }, 00:20:11.401 "method": "bdev_nvme_attach_controller" 00:20:11.401 }' 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:11.401 13:59:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:11.401 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:11.401 ... 00:20:11.401 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:11.401 ... 
00:20:11.401 fio-3.35 00:20:11.401 Starting 4 threads 00:20:15.597 00:20:15.597 filename0: (groupid=0, jobs=1): err= 0: pid=83495: Fri Dec 6 13:59:14 2024 00:20:15.597 read: IOPS=2238, BW=17.5MiB/s (18.3MB/s)(87.5MiB/5001msec) 00:20:15.597 slat (usec): min=3, max=105, avg=21.66, stdev=12.15 00:20:15.597 clat (usec): min=380, max=8484, avg=3503.22, stdev=1005.85 00:20:15.597 lat (usec): min=392, max=8530, avg=3524.89, stdev=1007.67 00:20:15.597 clat percentiles (usec): 00:20:15.597 | 1.00th=[ 1188], 5.00th=[ 1811], 10.00th=[ 1958], 20.00th=[ 2376], 00:20:15.597 | 30.00th=[ 3163], 40.00th=[ 3392], 50.00th=[ 3654], 60.00th=[ 3884], 00:20:15.597 | 70.00th=[ 4113], 80.00th=[ 4359], 90.00th=[ 4686], 95.00th=[ 5014], 00:20:15.597 | 99.00th=[ 5604], 99.50th=[ 5735], 99.90th=[ 6128], 99.95th=[ 6259], 00:20:15.597 | 99.99th=[ 6390] 00:20:15.597 bw ( KiB/s): min=12985, max=22304, per=25.36%, avg=18079.22, stdev=2761.20, samples=9 00:20:15.597 iops : min= 1623, max= 2788, avg=2259.89, stdev=345.18, samples=9 00:20:15.597 lat (usec) : 500=0.02%, 750=0.01%, 1000=0.18% 00:20:15.597 lat (msec) : 2=10.54%, 4=54.48%, 10=34.77% 00:20:15.597 cpu : usr=95.26%, sys=3.90%, ctx=55, majf=0, minf=0 00:20:15.597 IO depths : 1=1.3%, 2=10.9%, 4=57.8%, 8=30.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:15.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.597 complete : 0=0.0%, 4=95.8%, 8=4.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.597 issued rwts: total=11194,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:15.597 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:15.597 filename0: (groupid=0, jobs=1): err= 0: pid=83496: Fri Dec 6 13:59:14 2024 00:20:15.597 read: IOPS=2281, BW=17.8MiB/s (18.7MB/s)(89.1MiB/5001msec) 00:20:15.597 slat (usec): min=5, max=159, avg=20.23, stdev=12.05 00:20:15.597 clat (usec): min=493, max=7851, avg=3445.50, stdev=945.00 00:20:15.597 lat (usec): min=505, max=7875, avg=3465.73, stdev=946.64 00:20:15.597 clat percentiles (usec): 00:20:15.597 | 1.00th=[ 1352], 5.00th=[ 1876], 10.00th=[ 1991], 20.00th=[ 2376], 00:20:15.597 | 30.00th=[ 3097], 40.00th=[ 3326], 50.00th=[ 3589], 60.00th=[ 3818], 00:20:15.597 | 70.00th=[ 4080], 80.00th=[ 4293], 90.00th=[ 4555], 95.00th=[ 4752], 00:20:15.597 | 99.00th=[ 5276], 99.50th=[ 5473], 99.90th=[ 5866], 99.95th=[ 6194], 00:20:15.597 | 99.99th=[ 7177] 00:20:15.597 bw ( KiB/s): min=14768, max=21152, per=25.90%, avg=18458.67, stdev=2278.13, samples=9 00:20:15.597 iops : min= 1846, max= 2644, avg=2307.33, stdev=284.77, samples=9 00:20:15.597 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.07% 00:20:15.597 lat (msec) : 2=10.16%, 4=57.20%, 10=32.55% 00:20:15.597 cpu : usr=94.26%, sys=4.76%, ctx=35, majf=0, minf=0 00:20:15.597 IO depths : 1=1.3%, 2=8.7%, 4=59.0%, 8=31.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:15.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.597 complete : 0=0.0%, 4=96.6%, 8=3.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.597 issued rwts: total=11409,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:15.597 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:15.597 filename1: (groupid=0, jobs=1): err= 0: pid=83497: Fri Dec 6 13:59:14 2024 00:20:15.597 read: IOPS=2201, BW=17.2MiB/s (18.0MB/s)(86.0MiB/5001msec) 00:20:15.597 slat (usec): min=4, max=106, avg=22.46, stdev=11.84 00:20:15.597 clat (usec): min=402, max=8546, avg=3560.24, stdev=938.72 00:20:15.597 lat (usec): min=413, max=8560, avg=3582.71, stdev=939.41 00:20:15.597 clat percentiles (usec): 00:20:15.597 | 1.00th=[ 
1434], 5.00th=[ 1893], 10.00th=[ 2147], 20.00th=[ 2540], 00:20:15.597 | 30.00th=[ 3228], 40.00th=[ 3523], 50.00th=[ 3752], 60.00th=[ 3949], 00:20:15.597 | 70.00th=[ 4113], 80.00th=[ 4293], 90.00th=[ 4555], 95.00th=[ 4817], 00:20:15.597 | 99.00th=[ 5604], 99.50th=[ 5735], 99.90th=[ 6456], 99.95th=[ 8094], 00:20:15.597 | 99.99th=[ 8094] 00:20:15.597 bw ( KiB/s): min=15520, max=20704, per=24.88%, avg=17733.33, stdev=1543.25, samples=9 00:20:15.597 iops : min= 1940, max= 2588, avg=2216.67, stdev=192.91, samples=9 00:20:15.597 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.05% 00:20:15.598 lat (msec) : 2=7.00%, 4=55.58%, 10=37.33% 00:20:15.598 cpu : usr=95.72%, sys=3.46%, ctx=7, majf=0, minf=0 00:20:15.598 IO depths : 1=1.7%, 2=11.4%, 4=57.5%, 8=29.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:15.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.598 complete : 0=0.0%, 4=95.6%, 8=4.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.598 issued rwts: total=11011,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:15.598 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:15.598 filename1: (groupid=0, jobs=1): err= 0: pid=83498: Fri Dec 6 13:59:14 2024 00:20:15.598 read: IOPS=2192, BW=17.1MiB/s (18.0MB/s)(85.7MiB/5004msec) 00:20:15.598 slat (usec): min=6, max=109, avg=21.83, stdev=11.81 00:20:15.598 clat (usec): min=608, max=7769, avg=3578.66, stdev=941.30 00:20:15.598 lat (usec): min=621, max=7792, avg=3600.49, stdev=943.14 00:20:15.598 clat percentiles (usec): 00:20:15.598 | 1.00th=[ 1172], 5.00th=[ 1844], 10.00th=[ 2212], 20.00th=[ 2606], 00:20:15.598 | 30.00th=[ 3294], 40.00th=[ 3556], 50.00th=[ 3818], 60.00th=[ 3982], 00:20:15.598 | 70.00th=[ 4146], 80.00th=[ 4293], 90.00th=[ 4555], 95.00th=[ 4817], 00:20:15.598 | 99.00th=[ 5538], 99.50th=[ 5735], 99.90th=[ 6259], 99.95th=[ 6325], 00:20:15.598 | 99.99th=[ 6456] 00:20:15.598 bw ( KiB/s): min=14962, max=20112, per=24.16%, avg=17223.33, stdev=1625.84, samples=9 00:20:15.598 iops : min= 1870, max= 2514, avg=2152.89, stdev=203.27, samples=9 00:20:15.598 lat (usec) : 750=0.01%, 1000=0.15% 00:20:15.598 lat (msec) : 2=6.30%, 4=54.49%, 10=39.06% 00:20:15.598 cpu : usr=94.42%, sys=4.64%, ctx=7, majf=0, minf=0 00:20:15.598 IO depths : 1=1.7%, 2=11.2%, 4=57.7%, 8=29.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:15.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.598 complete : 0=0.0%, 4=95.7%, 8=4.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.598 issued rwts: total=10969,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:15.598 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:15.598 00:20:15.598 Run status group 0 (all jobs): 00:20:15.598 READ: bw=69.6MiB/s (73.0MB/s), 17.1MiB/s-17.8MiB/s (18.0MB/s-18.7MB/s), io=348MiB (365MB), run=5001-5004msec 00:20:15.857 13:59:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:20:15.857 13:59:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:15.857 13:59:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:15.857 13:59:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:15.857 13:59:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:15.857 13:59:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:15.857 13:59:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.857 13:59:15 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:20:15.857 13:59:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.857 13:59:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:15.857 13:59:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.857 13:59:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:15.857 13:59:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.857 13:59:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:15.857 13:59:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:15.857 13:59:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:15.857 13:59:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:15.857 13:59:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.857 13:59:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:15.857 13:59:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.857 13:59:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:15.857 13:59:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.857 13:59:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:15.857 ************************************ 00:20:15.857 END TEST fio_dif_rand_params 00:20:15.857 ************************************ 00:20:15.857 13:59:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.857 00:20:15.857 real 0m23.596s 00:20:15.857 user 2m6.309s 00:20:15.857 sys 0m5.722s 00:20:15.857 13:59:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:15.857 13:59:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:16.115 13:59:15 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:20:16.115 13:59:15 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:16.115 13:59:15 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:16.115 13:59:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:16.115 ************************************ 00:20:16.115 START TEST fio_dif_digest 00:20:16.115 ************************************ 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:16.115 bdev_null0 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:16.115 [2024-12-06 13:59:15.319423] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:16.115 { 00:20:16.115 "params": { 00:20:16.115 "name": "Nvme$subsystem", 00:20:16.115 "trtype": "$TEST_TRANSPORT", 00:20:16.115 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:16.115 "adrfam": "ipv4", 00:20:16.115 "trsvcid": "$NVMF_PORT", 00:20:16.115 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:16.115 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:16.115 "hdgst": ${hdgst:-false}, 00:20:16.115 "ddgst": ${ddgst:-false} 00:20:16.115 }, 00:20:16.115 "method": "bdev_nvme_attach_controller" 
00:20:16.115 } 00:20:16.115 EOF 00:20:16.115 )") 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:20:16.115 13:59:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:16.115 "params": { 00:20:16.115 "name": "Nvme0", 00:20:16.115 "trtype": "tcp", 00:20:16.115 "traddr": "10.0.0.3", 00:20:16.115 "adrfam": "ipv4", 00:20:16.115 "trsvcid": "4420", 00:20:16.115 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:16.115 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:16.115 "hdgst": true, 00:20:16.116 "ddgst": true 00:20:16.116 }, 00:20:16.116 "method": "bdev_nvme_attach_controller" 00:20:16.116 }' 00:20:16.116 13:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:16.116 13:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:16.116 13:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:16.116 13:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:16.116 13:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:16.116 13:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:16.116 13:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:16.116 13:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:16.116 13:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:16.116 13:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:16.374 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:16.374 ... 
00:20:16.374 fio-3.35 00:20:16.374 Starting 3 threads 00:20:28.614 00:20:28.614 filename0: (groupid=0, jobs=1): err= 0: pid=83609: Fri Dec 6 13:59:26 2024 00:20:28.614 read: IOPS=254, BW=31.8MiB/s (33.4MB/s)(318MiB/10003msec) 00:20:28.614 slat (usec): min=6, max=119, avg=27.68, stdev=14.03 00:20:28.614 clat (usec): min=10712, max=15504, avg=11722.02, stdev=1064.70 00:20:28.614 lat (usec): min=10726, max=15571, avg=11749.70, stdev=1065.66 00:20:28.614 clat percentiles (usec): 00:20:28.614 | 1.00th=[10814], 5.00th=[10814], 10.00th=[10814], 20.00th=[10945], 00:20:28.614 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11207], 60.00th=[11600], 00:20:28.614 | 70.00th=[11863], 80.00th=[12387], 90.00th=[13566], 95.00th=[14091], 00:20:28.614 | 99.00th=[14746], 99.50th=[15008], 99.90th=[15401], 99.95th=[15533], 00:20:28.614 | 99.99th=[15533] 00:20:28.614 bw ( KiB/s): min=26880, max=35328, per=33.24%, avg=32495.37, stdev=2651.97, samples=19 00:20:28.614 iops : min= 210, max= 276, avg=253.84, stdev=20.75, samples=19 00:20:28.614 lat (msec) : 20=100.00% 00:20:28.614 cpu : usr=95.14%, sys=4.31%, ctx=107, majf=0, minf=0 00:20:28.614 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:28.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.614 issued rwts: total=2547,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:28.614 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:28.614 filename0: (groupid=0, jobs=1): err= 0: pid=83610: Fri Dec 6 13:59:26 2024 00:20:28.614 read: IOPS=254, BW=31.8MiB/s (33.4MB/s)(319MiB/10008msec) 00:20:28.614 slat (usec): min=6, max=115, avg=26.44, stdev=14.01 00:20:28.614 clat (usec): min=6107, max=15520, avg=11717.54, stdev=1084.71 00:20:28.614 lat (usec): min=6117, max=15551, avg=11743.98, stdev=1084.36 00:20:28.614 clat percentiles (usec): 00:20:28.614 | 1.00th=[10814], 5.00th=[10814], 10.00th=[10814], 20.00th=[10945], 00:20:28.614 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11207], 60.00th=[11600], 00:20:28.614 | 70.00th=[11863], 80.00th=[12387], 90.00th=[13566], 95.00th=[14091], 00:20:28.614 | 99.00th=[14746], 99.50th=[15008], 99.90th=[15533], 99.95th=[15533], 00:20:28.614 | 99.99th=[15533] 00:20:28.614 bw ( KiB/s): min=26112, max=35328, per=33.28%, avg=32538.95, stdev=2735.85, samples=19 00:20:28.614 iops : min= 204, max= 276, avg=254.21, stdev=21.37, samples=19 00:20:28.614 lat (msec) : 10=0.12%, 20=99.88% 00:20:28.614 cpu : usr=94.59%, sys=4.90%, ctx=50, majf=0, minf=0 00:20:28.614 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:28.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.614 issued rwts: total=2550,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:28.614 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:28.614 filename0: (groupid=0, jobs=1): err= 0: pid=83611: Fri Dec 6 13:59:26 2024 00:20:28.614 read: IOPS=254, BW=31.8MiB/s (33.4MB/s)(318MiB/10002msec) 00:20:28.614 slat (usec): min=4, max=112, avg=24.47, stdev=13.29 00:20:28.614 clat (usec): min=10729, max=15506, avg=11727.81, stdev=1062.79 00:20:28.614 lat (usec): min=10751, max=15573, avg=11752.27, stdev=1064.27 00:20:28.614 clat percentiles (usec): 00:20:28.614 | 1.00th=[10814], 5.00th=[10814], 10.00th=[10814], 20.00th=[10945], 00:20:28.614 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11207], 
60.00th=[11600], 00:20:28.614 | 70.00th=[11863], 80.00th=[12387], 90.00th=[13566], 95.00th=[14091], 00:20:28.614 | 99.00th=[14746], 99.50th=[15008], 99.90th=[15401], 99.95th=[15533], 00:20:28.614 | 99.99th=[15533] 00:20:28.614 bw ( KiB/s): min=26880, max=35328, per=33.24%, avg=32498.53, stdev=2648.74, samples=19 00:20:28.614 iops : min= 210, max= 276, avg=253.89, stdev=20.69, samples=19 00:20:28.614 lat (msec) : 20=100.00% 00:20:28.614 cpu : usr=96.19%, sys=3.32%, ctx=15, majf=0, minf=0 00:20:28.614 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:28.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.614 issued rwts: total=2547,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:28.615 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:28.615 00:20:28.615 Run status group 0 (all jobs): 00:20:28.615 READ: bw=95.5MiB/s (100MB/s), 31.8MiB/s-31.8MiB/s (33.4MB/s-33.4MB/s), io=956MiB (1002MB), run=10002-10008msec 00:20:28.615 13:59:26 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:20:28.615 13:59:26 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:20:28.615 13:59:26 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:20:28.615 13:59:26 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:28.615 13:59:26 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:20:28.615 13:59:26 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:28.615 13:59:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.615 13:59:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:28.615 13:59:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.615 13:59:26 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:28.615 13:59:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.615 13:59:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:28.615 13:59:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.615 ************************************ 00:20:28.615 END TEST fio_dif_digest 00:20:28.615 ************************************ 00:20:28.615 00:20:28.615 real 0m11.160s 00:20:28.615 user 0m29.370s 00:20:28.615 sys 0m1.566s 00:20:28.615 13:59:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:28.615 13:59:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:28.615 13:59:26 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:20:28.615 13:59:26 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:20:28.615 13:59:26 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:28.615 13:59:26 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:20:28.615 13:59:26 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:28.615 13:59:26 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:20:28.615 13:59:26 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:28.615 13:59:26 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:28.615 rmmod nvme_tcp 00:20:28.615 rmmod nvme_fabrics 00:20:28.615 rmmod nvme_keyring 00:20:28.615 13:59:26 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:28.615 13:59:26 nvmf_dif -- 
nvmf/common.sh@128 -- # set -e 00:20:28.615 13:59:26 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:20:28.615 13:59:26 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 82853 ']' 00:20:28.615 13:59:26 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 82853 00:20:28.615 13:59:26 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 82853 ']' 00:20:28.615 13:59:26 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 82853 00:20:28.615 13:59:26 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:20:28.615 13:59:26 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:28.615 13:59:26 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82853 00:20:28.615 killing process with pid 82853 00:20:28.615 13:59:26 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:28.615 13:59:26 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:28.615 13:59:26 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82853' 00:20:28.615 13:59:26 nvmf_dif -- common/autotest_common.sh@973 -- # kill 82853 00:20:28.615 13:59:26 nvmf_dif -- common/autotest_common.sh@978 -- # wait 82853 00:20:28.615 13:59:26 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:20:28.615 13:59:26 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:28.615 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:28.615 Waiting for block devices as requested 00:20:28.615 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:28.615 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:28.615 13:59:27 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:28.615 13:59:27 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:28.615 13:59:27 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:20:28.615 13:59:27 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:20:28.615 13:59:27 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:28.615 13:59:27 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:20:28.615 13:59:27 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:28.615 13:59:27 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:28.615 13:59:27 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:28.615 13:59:27 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:28.615 13:59:27 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:28.615 13:59:27 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:28.615 13:59:27 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:28.615 13:59:27 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:28.615 13:59:27 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:28.615 13:59:27 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:28.615 13:59:27 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:28.615 13:59:27 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:28.615 13:59:27 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:28.615 13:59:27 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:28.615 13:59:27 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:28.615 13:59:27 nvmf_dif -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:20:28.615 13:59:27 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.615 13:59:27 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:28.615 13:59:27 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.615 13:59:27 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:20:28.615 ************************************ 00:20:28.615 END TEST nvmf_dif 00:20:28.615 ************************************ 00:20:28.615 00:20:28.615 real 0m59.710s 00:20:28.615 user 3m51.411s 00:20:28.615 sys 0m16.107s 00:20:28.615 13:59:27 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:28.615 13:59:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:28.615 13:59:27 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:28.615 13:59:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:28.615 13:59:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:28.615 13:59:27 -- common/autotest_common.sh@10 -- # set +x 00:20:28.615 ************************************ 00:20:28.615 START TEST nvmf_abort_qd_sizes 00:20:28.615 ************************************ 00:20:28.615 13:59:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:28.615 * Looking for test storage... 00:20:28.615 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:28.615 13:59:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:28.615 13:59:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:20:28.615 13:59:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:28.615 13:59:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:28.615 13:59:27 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:28.615 13:59:27 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:28.615 13:59:27 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:28.615 13:59:27 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:20:28.615 13:59:27 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:20:28.615 13:59:27 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:20:28.615 13:59:27 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:20:28.615 13:59:27 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:20:28.615 13:59:27 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:20:28.615 13:59:27 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:20:28.615 13:59:27 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:28.615 13:59:27 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:20:28.615 13:59:27 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:20:28.615 13:59:27 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:28.615 13:59:27 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:28.615 13:59:27 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:20:28.615 13:59:27 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:20:28.615 13:59:27 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:28.615 13:59:27 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:20:28.615 13:59:27 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:20:28.615 13:59:27 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:20:28.615 13:59:27 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:20:28.615 13:59:27 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:28.615 13:59:27 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:20:28.615 13:59:27 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:20:28.615 13:59:27 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:28.615 13:59:27 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:28.615 13:59:27 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:20:28.615 13:59:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:28.615 13:59:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:28.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.615 --rc genhtml_branch_coverage=1 00:20:28.615 --rc genhtml_function_coverage=1 00:20:28.615 --rc genhtml_legend=1 00:20:28.615 --rc geninfo_all_blocks=1 00:20:28.615 --rc geninfo_unexecuted_blocks=1 00:20:28.615 00:20:28.615 ' 00:20:28.615 13:59:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:28.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.615 --rc genhtml_branch_coverage=1 00:20:28.615 --rc genhtml_function_coverage=1 00:20:28.615 --rc genhtml_legend=1 00:20:28.615 --rc geninfo_all_blocks=1 00:20:28.615 --rc geninfo_unexecuted_blocks=1 00:20:28.615 00:20:28.615 ' 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:28.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.616 --rc genhtml_branch_coverage=1 00:20:28.616 --rc genhtml_function_coverage=1 00:20:28.616 --rc genhtml_legend=1 00:20:28.616 --rc geninfo_all_blocks=1 00:20:28.616 --rc geninfo_unexecuted_blocks=1 00:20:28.616 00:20:28.616 ' 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:28.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.616 --rc genhtml_branch_coverage=1 00:20:28.616 --rc genhtml_function_coverage=1 00:20:28.616 --rc genhtml_legend=1 00:20:28.616 --rc geninfo_all_blocks=1 00:20:28.616 --rc geninfo_unexecuted_blocks=1 00:20:28.616 00:20:28.616 ' 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=cfa2def7-c8af-457f-82a0-b312efdea7f4 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:28.616 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:28.616 Cannot find device "nvmf_init_br" 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:28.616 Cannot find device "nvmf_init_br2" 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:28.616 Cannot find device "nvmf_tgt_br" 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:20:28.616 13:59:27 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:28.616 Cannot find device "nvmf_tgt_br2" 00:20:28.616 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:20:28.616 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:28.875 Cannot find device "nvmf_init_br" 00:20:28.875 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:20:28.875 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:28.875 Cannot find device "nvmf_init_br2" 00:20:28.875 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:20:28.875 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:28.875 Cannot find device "nvmf_tgt_br" 00:20:28.875 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:20:28.875 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:28.875 Cannot find device "nvmf_tgt_br2" 00:20:28.875 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:20:28.875 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:28.875 Cannot find device "nvmf_br" 00:20:28.875 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:20:28.875 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:28.875 Cannot find device "nvmf_init_if" 00:20:28.875 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:20:28.875 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:28.875 Cannot find device "nvmf_init_if2" 00:20:28.875 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:20:28.875 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:28.875 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
00:20:28.875 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:20:28.875 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:28.875 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:28.875 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:20:28.875 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:28.875 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:28.875 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:28.875 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:28.875 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:28.875 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:28.875 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:28.875 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:28.875 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:28.875 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:28.875 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:28.875 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:28.875 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:28.875 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:28.875 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:28.875 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:28.875 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:28.875 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:28.875 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:28.875 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:28.875 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:28.875 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:28.875 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:28.875 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:28.875 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:29.134 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:29.134 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:29.134 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:29.134 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:29.134 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:29.134 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:29.134 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:29.134 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:29.134 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:29.134 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:20:29.134 00:20:29.134 --- 10.0.0.3 ping statistics --- 00:20:29.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.134 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:20:29.134 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:29.134 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:29.134 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.076 ms 00:20:29.134 00:20:29.134 --- 10.0.0.4 ping statistics --- 00:20:29.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.134 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:20:29.134 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:29.134 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:29.134 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:20:29.134 00:20:29.134 --- 10.0.0.1 ping statistics --- 00:20:29.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.134 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:20:29.134 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:29.134 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:29.134 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:20:29.134 00:20:29.134 --- 10.0.0.2 ping statistics --- 00:20:29.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.134 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:20:29.134 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:29.134 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:20:29.134 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:20:29.134 13:59:28 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:29.702 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:29.702 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:29.962 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:29.962 13:59:29 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:29.962 13:59:29 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:29.962 13:59:29 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:29.962 13:59:29 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:29.962 13:59:29 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:29.962 13:59:29 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:29.962 13:59:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:20:29.962 13:59:29 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:29.962 13:59:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:29.962 13:59:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:29.962 13:59:29 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=84269 00:20:29.962 13:59:29 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 84269 00:20:29.962 13:59:29 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:20:29.962 13:59:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 84269 ']' 00:20:29.962 13:59:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.962 13:59:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:29.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.962 13:59:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.962 13:59:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:29.962 13:59:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:29.962 [2024-12-06 13:59:29.333711] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
00:20:29.962 [2024-12-06 13:59:29.333808] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:30.221 [2024-12-06 13:59:29.489222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:30.221 [2024-12-06 13:59:29.547900] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:30.221 [2024-12-06 13:59:29.548476] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:30.221 [2024-12-06 13:59:29.548783] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:30.221 [2024-12-06 13:59:29.549271] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:30.221 [2024-12-06 13:59:29.549509] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:30.221 [2024-12-06 13:59:29.551139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:30.221 [2024-12-06 13:59:29.551278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:30.221 [2024-12-06 13:59:29.551388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.221 [2024-12-06 13:59:29.551382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:30.221 [2024-12-06 13:59:29.616557] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:20:30.481 13:59:29 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
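For reference, the spdk_target_abort flow traced below condenses to roughly the following sequence; this is an illustrative summary drawn from the trace itself, written with rpc.py in place of the script's rpc_cmd wrapper, not an additional set of commands from this run:

  # attach the selected local NVMe device (0000:00:10.0) as bdev "spdk_targetn1"
  scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target
  # expose it over NVMe/TCP on the target-namespace address 10.0.0.3:4420
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420
  # drive mixed I/O against the subsystem and abort it at queue depths 4, 24 and 64
  for qd in 4 24 64; do
    build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  done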
00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:30.481 13:59:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:30.481 ************************************ 00:20:30.481 START TEST spdk_target_abort 00:20:30.481 ************************************ 00:20:30.481 13:59:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:20:30.481 13:59:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:20:30.481 13:59:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:20:30.481 13:59:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.481 13:59:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:30.481 spdk_targetn1 00:20:30.481 13:59:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.481 13:59:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:30.481 13:59:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.481 13:59:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:30.481 [2024-12-06 13:59:29.866496] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:30.740 13:59:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.740 13:59:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:20:30.740 13:59:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.741 13:59:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:30.741 13:59:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.741 13:59:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:20:30.741 13:59:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.741 13:59:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:30.741 13:59:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.741 13:59:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:20:30.741 13:59:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.741 13:59:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:30.741 [2024-12-06 13:59:29.903596] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:30.741 13:59:29 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.741 13:59:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:20:30.741 13:59:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:30.741 13:59:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:30.741 13:59:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:20:30.741 13:59:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:30.741 13:59:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:20:30.741 13:59:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:30.741 13:59:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:30.741 13:59:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:30.741 13:59:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:30.741 13:59:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:30.741 13:59:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:30.741 13:59:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:30.741 13:59:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:30.741 13:59:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:20:30.741 13:59:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:30.741 13:59:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:30.741 13:59:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:30.741 13:59:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:30.741 13:59:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:30.741 13:59:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:34.033 Initializing NVMe Controllers 00:20:34.033 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:20:34.033 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:34.033 Initialization complete. Launching workers. 
00:20:34.034 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9555, failed: 0 00:20:34.034 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1098, failed to submit 8457 00:20:34.034 success 903, unsuccessful 195, failed 0 00:20:34.034 13:59:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:34.034 13:59:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:37.320 Initializing NVMe Controllers 00:20:37.320 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:20:37.320 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:37.320 Initialization complete. Launching workers. 00:20:37.320 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9012, failed: 0 00:20:37.320 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1196, failed to submit 7816 00:20:37.320 success 338, unsuccessful 858, failed 0 00:20:37.320 13:59:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:37.320 13:59:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:40.606 Initializing NVMe Controllers 00:20:40.606 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:20:40.606 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:40.606 Initialization complete. Launching workers. 
00:20:40.606 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30825, failed: 0 00:20:40.606 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2418, failed to submit 28407 00:20:40.606 success 524, unsuccessful 1894, failed 0 00:20:40.606 13:59:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:20:40.606 13:59:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.606 13:59:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:40.606 13:59:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.606 13:59:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:20:40.606 13:59:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.606 13:59:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:41.174 13:59:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.174 13:59:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84269 00:20:41.174 13:59:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 84269 ']' 00:20:41.174 13:59:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 84269 00:20:41.174 13:59:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:20:41.174 13:59:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:41.174 13:59:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84269 00:20:41.174 killing process with pid 84269 00:20:41.174 13:59:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:41.174 13:59:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:41.175 13:59:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84269' 00:20:41.175 13:59:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 84269 00:20:41.175 13:59:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 84269 00:20:41.175 ************************************ 00:20:41.175 END TEST spdk_target_abort 00:20:41.175 ************************************ 00:20:41.175 00:20:41.175 real 0m10.754s 00:20:41.175 user 0m41.290s 00:20:41.175 sys 0m2.112s 00:20:41.175 13:59:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:41.175 13:59:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:41.433 13:59:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:20:41.433 13:59:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:41.433 13:59:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:41.433 13:59:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:41.433 ************************************ 00:20:41.433 START TEST kernel_target_abort 00:20:41.433 
************************************ 00:20:41.433 13:59:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:20:41.433 13:59:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:20:41.433 13:59:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:20:41.433 13:59:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:41.433 13:59:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:41.433 13:59:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:41.433 13:59:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:41.433 13:59:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:41.433 13:59:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:41.433 13:59:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:41.433 13:59:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:41.433 13:59:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:41.433 13:59:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:20:41.433 13:59:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:20:41.433 13:59:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:20:41.433 13:59:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:41.433 13:59:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:41.433 13:59:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:41.433 13:59:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:20:41.433 13:59:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:20:41.433 13:59:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:20:41.433 13:59:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:41.433 13:59:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:41.692 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:41.692 Waiting for block devices as requested 00:20:41.692 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:41.950 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:41.950 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:41.950 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:41.950 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:20:41.950 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:20:41.950 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:41.950 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:41.950 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:20:41.951 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:20:41.951 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:41.951 No valid GPT data, bailing 00:20:41.951 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:41.951 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:20:41.951 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:20:41.951 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:20:41.951 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:41.951 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:41.951 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:20:41.951 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:20:41.951 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:41.951 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:41.951 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:20:41.951 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:20:41.951 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:41.951 No valid GPT data, bailing 00:20:41.951 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:20:41.951 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:20:41.951 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:20:41.951 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:20:41.951 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:41.951 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:41.951 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:20:41.951 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:20:41.951 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:41.951 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:41.951 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:20:41.951 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:20:41.951 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:42.209 No valid GPT data, bailing 00:20:42.209 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:42.209 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:20:42.209 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:20:42.209 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:20:42.209 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:42.209 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:42.209 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:20:42.209 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:20:42.209 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:42.209 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:42.209 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:20:42.209 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:20:42.209 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:42.209 No valid GPT data, bailing 00:20:42.209 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:42.209 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:20:42.209 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:20:42.209 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:20:42.209 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:20:42.209 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:42.209 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:42.209 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:42.209 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:20:42.209 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:20:42.209 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:20:42.209 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:20:42.209 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:20:42.209 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:20:42.209 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:20:42.209 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:20:42.209 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:42.209 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 --hostid=cfa2def7-c8af-457f-82a0-b312efdea7f4 -a 10.0.0.1 -t tcp -s 4420 00:20:42.209 00:20:42.209 Discovery Log Number of Records 2, Generation counter 2 00:20:42.209 =====Discovery Log Entry 0====== 00:20:42.209 trtype: tcp 00:20:42.209 adrfam: ipv4 00:20:42.210 subtype: current discovery subsystem 00:20:42.210 treq: not specified, sq flow control disable supported 00:20:42.210 portid: 1 00:20:42.210 trsvcid: 4420 00:20:42.210 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:42.210 traddr: 10.0.0.1 00:20:42.210 eflags: none 00:20:42.210 sectype: none 00:20:42.210 =====Discovery Log Entry 1====== 00:20:42.210 trtype: tcp 00:20:42.210 adrfam: ipv4 00:20:42.210 subtype: nvme subsystem 00:20:42.210 treq: not specified, sq flow control disable supported 00:20:42.210 portid: 1 00:20:42.210 trsvcid: 4420 00:20:42.210 subnqn: nqn.2016-06.io.spdk:testnqn 00:20:42.210 traddr: 10.0.0.1 00:20:42.210 eflags: none 00:20:42.210 sectype: none 00:20:42.210 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:20:42.210 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:42.210 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:42.210 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:20:42.210 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:42.210 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:20:42.210 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:42.210 13:59:41 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:42.210 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:42.210 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:42.210 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:42.210 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:42.210 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:42.210 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:42.210 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:20:42.210 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:42.210 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:20:42.210 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:42.210 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:42.210 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:42.210 13:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:45.619 Initializing NVMe Controllers 00:20:45.619 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:45.619 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:45.619 Initialization complete. Launching workers. 00:20:45.619 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32552, failed: 0 00:20:45.619 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32552, failed to submit 0 00:20:45.619 success 0, unsuccessful 32552, failed 0 00:20:45.619 13:59:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:45.619 13:59:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:48.903 Initializing NVMe Controllers 00:20:48.903 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:48.903 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:48.903 Initialization complete. Launching workers. 
00:20:48.903 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 64477, failed: 0 00:20:48.903 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26894, failed to submit 37583 00:20:48.903 success 0, unsuccessful 26894, failed 0 00:20:48.903 13:59:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:48.903 13:59:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:52.196 Initializing NVMe Controllers 00:20:52.196 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:52.196 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:52.196 Initialization complete. Launching workers. 00:20:52.196 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 97612, failed: 0 00:20:52.196 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24398, failed to submit 73214 00:20:52.196 success 0, unsuccessful 24398, failed 0 00:20:52.196 13:59:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:20:52.196 13:59:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:20:52.196 13:59:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:20:52.196 13:59:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:52.196 13:59:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:52.196 13:59:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:52.196 13:59:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:52.196 13:59:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:20:52.196 13:59:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:20:52.196 13:59:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:52.454 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:54.982 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:55.241 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:55.241 ************************************ 00:20:55.241 END TEST kernel_target_abort 00:20:55.241 ************************************ 00:20:55.241 00:20:55.241 real 0m13.877s 00:20:55.241 user 0m5.773s 00:20:55.241 sys 0m5.438s 00:20:55.241 13:59:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:55.241 13:59:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:55.241 13:59:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:55.241 13:59:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:20:55.241 
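For reference, the configure_kernel_target and clean_kernel_target steps the xtrace above walks through, condensed into one sketch. xtrace does not record output redirections, so the nvmet configfs attribute names below (attr_allow_any_host, device_path, enable, addr_*) are the standard kernel ones and are assumed rather than copied from the log:

    modprobe nvmet
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir /sys/kernel/config/nvmet/ports/1
    echo 1 > "$subsys/attr_allow_any_host"
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"   # the unused block device picked above
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > /sys/kernel/config/nvmet/ports/1/addr_traddr
    echo tcp      > /sys/kernel/config/nvmet/ports/1/addr_trtype
    echo 4420     > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
    echo ipv4     > /sys/kernel/config/nvmet/ports/1/addr_adrfam
    ln -s "$subsys" /sys/kernel/config/nvmet/ports/1/subsystems/
    # teardown, as in clean_kernel_target above:
    echo 0 > "$subsys/namespaces/1/enable"
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
    rmdir "$subsys/namespaces/1" /sys/kernel/config/nvmet/ports/1 "$subsys"
    modprobe -r nvmet_tcp nvmet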
13:59:54 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:55.241 13:59:54 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:20:55.241 13:59:54 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:55.241 13:59:54 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:20:55.241 13:59:54 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:55.241 13:59:54 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:55.241 rmmod nvme_tcp 00:20:55.241 rmmod nvme_fabrics 00:20:55.241 rmmod nvme_keyring 00:20:55.241 13:59:54 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:55.241 13:59:54 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:20:55.241 13:59:54 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:20:55.241 13:59:54 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 84269 ']' 00:20:55.241 13:59:54 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 84269 00:20:55.241 13:59:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 84269 ']' 00:20:55.241 13:59:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 84269 00:20:55.241 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (84269) - No such process 00:20:55.241 Process with pid 84269 is not found 00:20:55.241 13:59:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 84269 is not found' 00:20:55.241 13:59:54 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:20:55.241 13:59:54 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:55.809 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:55.809 Waiting for block devices as requested 00:20:55.809 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:55.809 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:56.067 13:59:55 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:56.067 13:59:55 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:56.067 13:59:55 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:20:56.067 13:59:55 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:20:56.067 13:59:55 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:20:56.067 13:59:55 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:56.067 13:59:55 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:56.067 13:59:55 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:56.067 13:59:55 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:56.067 13:59:55 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:56.067 13:59:55 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:56.067 13:59:55 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:56.067 13:59:55 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:56.067 13:59:55 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:56.067 13:59:55 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:56.067 13:59:55 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:56.067 13:59:55 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:56.067 13:59:55 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:56.067 13:59:55 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:56.067 13:59:55 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:56.067 13:59:55 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:56.325 13:59:55 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:56.325 13:59:55 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.325 13:59:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:56.325 13:59:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.325 13:59:55 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:20:56.325 00:20:56.325 real 0m27.772s 00:20:56.325 user 0m48.192s 00:20:56.325 sys 0m9.086s 00:20:56.325 ************************************ 00:20:56.325 END TEST nvmf_abort_qd_sizes 00:20:56.325 ************************************ 00:20:56.325 13:59:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:56.325 13:59:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:56.325 13:59:55 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:20:56.325 13:59:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:56.325 13:59:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:56.325 13:59:55 -- common/autotest_common.sh@10 -- # set +x 00:20:56.325 ************************************ 00:20:56.325 START TEST keyring_file 00:20:56.325 ************************************ 00:20:56.325 13:59:55 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:20:56.325 * Looking for test storage... 
00:20:56.325 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:20:56.325 13:59:55 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:56.325 13:59:55 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:20:56.325 13:59:55 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:56.583 13:59:55 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:56.584 13:59:55 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:56.584 13:59:55 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:56.584 13:59:55 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:56.584 13:59:55 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:20:56.584 13:59:55 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:20:56.584 13:59:55 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:20:56.584 13:59:55 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:20:56.584 13:59:55 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:20:56.584 13:59:55 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:20:56.584 13:59:55 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:20:56.584 13:59:55 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:56.584 13:59:55 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:20:56.584 13:59:55 keyring_file -- scripts/common.sh@345 -- # : 1 00:20:56.584 13:59:55 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:56.584 13:59:55 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:56.584 13:59:55 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:20:56.584 13:59:55 keyring_file -- scripts/common.sh@353 -- # local d=1 00:20:56.584 13:59:55 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:56.584 13:59:55 keyring_file -- scripts/common.sh@355 -- # echo 1 00:20:56.584 13:59:55 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:20:56.584 13:59:55 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:20:56.584 13:59:55 keyring_file -- scripts/common.sh@353 -- # local d=2 00:20:56.584 13:59:55 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:56.584 13:59:55 keyring_file -- scripts/common.sh@355 -- # echo 2 00:20:56.584 13:59:55 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:20:56.584 13:59:55 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:56.584 13:59:55 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:56.584 13:59:55 keyring_file -- scripts/common.sh@368 -- # return 0 00:20:56.584 13:59:55 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:56.584 13:59:55 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:56.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.584 --rc genhtml_branch_coverage=1 00:20:56.584 --rc genhtml_function_coverage=1 00:20:56.584 --rc genhtml_legend=1 00:20:56.584 --rc geninfo_all_blocks=1 00:20:56.584 --rc geninfo_unexecuted_blocks=1 00:20:56.584 00:20:56.584 ' 00:20:56.584 13:59:55 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:56.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.584 --rc genhtml_branch_coverage=1 00:20:56.584 --rc genhtml_function_coverage=1 00:20:56.584 --rc genhtml_legend=1 00:20:56.584 --rc geninfo_all_blocks=1 00:20:56.584 --rc 
geninfo_unexecuted_blocks=1 00:20:56.584 00:20:56.584 ' 00:20:56.584 13:59:55 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:56.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.584 --rc genhtml_branch_coverage=1 00:20:56.584 --rc genhtml_function_coverage=1 00:20:56.584 --rc genhtml_legend=1 00:20:56.584 --rc geninfo_all_blocks=1 00:20:56.584 --rc geninfo_unexecuted_blocks=1 00:20:56.584 00:20:56.584 ' 00:20:56.584 13:59:55 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:56.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.584 --rc genhtml_branch_coverage=1 00:20:56.584 --rc genhtml_function_coverage=1 00:20:56.584 --rc genhtml_legend=1 00:20:56.584 --rc geninfo_all_blocks=1 00:20:56.584 --rc geninfo_unexecuted_blocks=1 00:20:56.584 00:20:56.584 ' 00:20:56.584 13:59:55 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:20:56.584 13:59:55 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:56.584 13:59:55 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:20:56.584 13:59:55 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:56.584 13:59:55 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:56.584 13:59:55 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:56.584 13:59:55 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:56.584 13:59:55 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:56.584 13:59:55 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:56.584 13:59:55 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:56.584 13:59:55 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:56.584 13:59:55 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:56.584 13:59:55 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:56.584 13:59:55 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:20:56.584 13:59:55 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=cfa2def7-c8af-457f-82a0-b312efdea7f4 00:20:56.584 13:59:55 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:56.584 13:59:55 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:56.584 13:59:55 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:56.584 13:59:55 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:56.584 13:59:55 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:56.584 13:59:55 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:20:56.584 13:59:55 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:56.584 13:59:55 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:56.584 13:59:55 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:56.584 13:59:55 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.584 13:59:55 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.584 13:59:55 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.584 13:59:55 keyring_file -- paths/export.sh@5 -- # export PATH 00:20:56.584 13:59:55 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.584 13:59:55 keyring_file -- nvmf/common.sh@51 -- # : 0 00:20:56.584 13:59:55 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:56.584 13:59:55 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:56.584 13:59:55 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:56.584 13:59:55 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:56.584 13:59:55 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:56.584 13:59:55 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:56.584 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:56.584 13:59:55 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:56.584 13:59:55 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:56.584 13:59:55 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:56.584 13:59:55 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:20:56.584 13:59:55 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:20:56.584 13:59:55 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:20:56.584 13:59:55 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:20:56.584 13:59:55 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:20:56.584 13:59:55 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:20:56.584 13:59:55 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:20:56.584 13:59:55 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:20:56.584 13:59:55 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:20:56.584 13:59:55 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:20:56.584 13:59:55 keyring_file -- keyring/common.sh@17 -- # digest=0 00:20:56.584 13:59:55 keyring_file -- keyring/common.sh@18 -- # mktemp 00:20:56.584 13:59:55 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.t8f7CknHFy 00:20:56.584 13:59:55 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:20:56.584 13:59:55 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:20:56.584 13:59:55 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:20:56.584 13:59:55 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:56.584 13:59:55 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:20:56.584 13:59:55 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:20:56.584 13:59:55 keyring_file -- nvmf/common.sh@733 -- # python - 00:20:56.584 13:59:55 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.t8f7CknHFy 00:20:56.584 13:59:55 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.t8f7CknHFy 00:20:56.584 13:59:55 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.t8f7CknHFy 00:20:56.584 13:59:55 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:20:56.584 13:59:55 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:20:56.585 13:59:55 keyring_file -- keyring/common.sh@17 -- # name=key1 00:20:56.585 13:59:55 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:20:56.585 13:59:55 keyring_file -- keyring/common.sh@17 -- # digest=0 00:20:56.585 13:59:55 keyring_file -- keyring/common.sh@18 -- # mktemp 00:20:56.585 13:59:55 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.jHufBM960S 00:20:56.585 13:59:55 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:20:56.585 13:59:55 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:20:56.585 13:59:55 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:20:56.585 13:59:55 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:56.585 13:59:55 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:20:56.585 13:59:55 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:20:56.585 13:59:55 keyring_file -- nvmf/common.sh@733 -- # python - 00:20:56.585 13:59:55 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.jHufBM960S 00:20:56.585 13:59:55 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.jHufBM960S 00:20:56.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
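The two file-based TLS keys used by the rest of this test come from the prep_key calls above. Condensed, with the output redirection (not visible in xtrace) assumed:

    key0path=$(mktemp)                                              # /tmp/tmp.t8f7CknHFy in this run
    format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$key0path"
    chmod 0600 "$key0path"
    key1path=$(mktemp)                                              # /tmp/tmp.jHufBM960S in this run
    format_interchange_psk 112233445566778899aabbccddeeff00 0 > "$key1path"
    chmod 0600 "$key1path"
    # format_interchange_psk (nvmf/common.sh) wraps the raw hex key into the NVMeTLSkey-1
    # interchange format via the small python helper shown above; it is not reimplemented here.
    # The 0600 mode matters: the keyring_file permission check exercised near the end of this
    # log rejects anything group- or world-readable.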
00:20:56.585 13:59:55 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.jHufBM960S 00:20:56.585 13:59:55 keyring_file -- keyring/file.sh@30 -- # tgtpid=85178 00:20:56.585 13:59:55 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:56.585 13:59:55 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85178 00:20:56.585 13:59:55 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85178 ']' 00:20:56.585 13:59:55 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.585 13:59:55 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:56.585 13:59:55 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.585 13:59:55 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:56.585 13:59:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:56.585 [2024-12-06 13:59:55.983273] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:20:56.585 [2024-12-06 13:59:55.983564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85178 ] 00:20:56.843 [2024-12-06 13:59:56.134231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.843 [2024-12-06 13:59:56.197532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.101 [2024-12-06 13:59:56.300190] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:57.360 13:59:56 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:57.360 13:59:56 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:20:57.360 13:59:56 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:20:57.360 13:59:56 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.360 13:59:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:57.360 [2024-12-06 13:59:56.553960] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:57.360 null0 00:20:57.360 [2024-12-06 13:59:56.585936] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:57.360 [2024-12-06 13:59:56.586144] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:20:57.360 13:59:56 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.360 13:59:56 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:20:57.360 13:59:56 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:20:57.360 13:59:56 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:20:57.360 13:59:56 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:57.360 13:59:56 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:57.360 13:59:56 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:57.360 13:59:56 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:57.360 13:59:56 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 
127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:20:57.360 13:59:56 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.360 13:59:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:57.360 [2024-12-06 13:59:56.613919] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:20:57.360 request: 00:20:57.360 { 00:20:57.360 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:20:57.360 "secure_channel": false, 00:20:57.360 "listen_address": { 00:20:57.360 "trtype": "tcp", 00:20:57.360 "traddr": "127.0.0.1", 00:20:57.360 "trsvcid": "4420" 00:20:57.360 }, 00:20:57.360 "method": "nvmf_subsystem_add_listener", 00:20:57.360 "req_id": 1 00:20:57.360 } 00:20:57.360 Got JSON-RPC error response 00:20:57.360 response: 00:20:57.360 { 00:20:57.360 "code": -32602, 00:20:57.360 "message": "Invalid parameters" 00:20:57.360 } 00:20:57.360 13:59:56 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:57.360 13:59:56 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:20:57.360 13:59:56 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:57.360 13:59:56 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:57.360 13:59:56 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:57.360 13:59:56 keyring_file -- keyring/file.sh@47 -- # bperfpid=85188 00:20:57.360 13:59:56 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:20:57.360 13:59:56 keyring_file -- keyring/file.sh@49 -- # waitforlisten 85188 /var/tmp/bperf.sock 00:20:57.360 13:59:56 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85188 ']' 00:20:57.360 13:59:56 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:57.360 13:59:56 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:57.360 13:59:56 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:57.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:57.360 13:59:56 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:57.360 13:59:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:57.360 [2024-12-06 13:59:56.680874] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
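bdevperf is started idle (-z) with its own RPC socket, and every keyring/bdev command in the records that follow is issued through it. The bperf_cmd helper seen in the xtrace amounts to:

    bperf_cmd() {
        # forward an RPC to the bdevperf instance listening on /var/tmp/bperf.sock
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"
    }

    # e.g. the key registrations that follow:
    bperf_cmd keyring_file_add_key key0 /tmp/tmp.t8f7CknHFy
    bperf_cmd keyring_file_add_key key1 /tmp/tmp.jHufBM960S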
00:20:57.360 [2024-12-06 13:59:56.681124] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85188 ] 00:20:57.619 [2024-12-06 13:59:56.832246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.619 [2024-12-06 13:59:56.891028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:57.619 [2024-12-06 13:59:56.948106] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:57.619 13:59:57 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:57.619 13:59:57 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:20:57.619 13:59:57 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.t8f7CknHFy 00:20:57.619 13:59:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.t8f7CknHFy 00:20:58.187 13:59:57 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.jHufBM960S 00:20:58.187 13:59:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.jHufBM960S 00:20:58.187 13:59:57 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:20:58.187 13:59:57 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:20:58.187 13:59:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:58.187 13:59:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:58.187 13:59:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:58.447 13:59:57 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.t8f7CknHFy == \/\t\m\p\/\t\m\p\.\t\8\f\7\C\k\n\H\F\y ]] 00:20:58.447 13:59:57 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:20:58.447 13:59:57 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:20:58.447 13:59:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:58.447 13:59:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:58.447 13:59:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:58.706 13:59:58 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.jHufBM960S == \/\t\m\p\/\t\m\p\.\j\H\u\f\B\M\9\6\0\S ]] 00:20:58.706 13:59:58 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:20:58.706 13:59:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:58.706 13:59:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:58.707 13:59:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:58.707 13:59:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:58.707 13:59:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:59.050 13:59:58 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:20:59.050 13:59:58 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:20:59.050 13:59:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:20:59.050 13:59:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:59.050 13:59:58 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:59.050 13:59:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:59.050 13:59:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:59.317 13:59:58 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:20:59.317 13:59:58 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:59.317 13:59:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:59.317 [2024-12-06 13:59:58.667045] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:59.576 nvme0n1 00:20:59.576 13:59:58 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:20:59.576 13:59:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:59.576 13:59:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:59.576 13:59:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:59.576 13:59:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:59.576 13:59:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:59.835 13:59:59 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:20:59.835 13:59:59 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:20:59.835 13:59:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:20:59.835 13:59:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:59.835 13:59:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:59.835 13:59:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:59.835 13:59:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:00.094 13:59:59 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:21:00.094 13:59:59 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:00.094 Running I/O for 1 seconds... 
00:21:01.028 13883.00 IOPS, 54.23 MiB/s 00:21:01.028 Latency(us) 00:21:01.028 [2024-12-06T14:00:00.432Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.028 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:21:01.028 nvme0n1 : 1.01 13915.19 54.36 0.00 0.00 9178.23 5332.25 22163.08 00:21:01.028 [2024-12-06T14:00:00.432Z] =================================================================================================================== 00:21:01.028 [2024-12-06T14:00:00.432Z] Total : 13915.19 54.36 0.00 0.00 9178.23 5332.25 22163.08 00:21:01.028 { 00:21:01.028 "results": [ 00:21:01.028 { 00:21:01.028 "job": "nvme0n1", 00:21:01.028 "core_mask": "0x2", 00:21:01.028 "workload": "randrw", 00:21:01.028 "percentage": 50, 00:21:01.028 "status": "finished", 00:21:01.028 "queue_depth": 128, 00:21:01.028 "io_size": 4096, 00:21:01.028 "runtime": 1.007029, 00:21:01.029 "iops": 13915.190128586168, 00:21:01.029 "mibps": 54.35621143978972, 00:21:01.029 "io_failed": 0, 00:21:01.029 "io_timeout": 0, 00:21:01.029 "avg_latency_us": 9178.232978987045, 00:21:01.029 "min_latency_us": 5332.2472727272725, 00:21:01.029 "max_latency_us": 22163.083636363637 00:21:01.029 } 00:21:01.029 ], 00:21:01.029 "core_count": 1 00:21:01.029 } 00:21:01.029 14:00:00 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:01.029 14:00:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:01.287 14:00:00 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:21:01.287 14:00:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:01.287 14:00:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:01.287 14:00:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:01.287 14:00:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:01.287 14:00:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:01.546 14:00:00 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:21:01.546 14:00:00 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:21:01.546 14:00:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:01.546 14:00:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:01.546 14:00:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:01.546 14:00:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:01.546 14:00:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:01.805 14:00:01 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:21:01.805 14:00:01 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:01.805 14:00:01 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:21:01.805 14:00:01 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:01.805 14:00:01 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:21:01.805 14:00:01 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:01.805 14:00:01 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:21:01.805 14:00:01 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:01.805 14:00:01 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:01.805 14:00:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:02.064 [2024-12-06 14:00:01.291457] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:02.064 [2024-12-06 14:00:01.292163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1105ce0 (107): Transport endpoint is not connected 00:21:02.064 [2024-12-06 14:00:01.293152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1105ce0 (9): Bad file descriptor 00:21:02.064 [2024-12-06 14:00:01.294149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:21:02.064 request: 00:21:02.064 { 00:21:02.064 "name": "nvme0", 00:21:02.064 "trtype": "tcp", 00:21:02.064 "traddr": "127.0.0.1", 00:21:02.064 "adrfam": "ipv4", 00:21:02.064 "trsvcid": "4420", 00:21:02.064 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:02.064 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:02.064 "prchk_reftag": false, 00:21:02.064 "prchk_guard": false, 00:21:02.064 "hdgst": false, 00:21:02.064 "ddgst": false, 00:21:02.064 "psk": "key1", 00:21:02.064 "allow_unrecognized_csi": false, 00:21:02.064 "method": "bdev_nvme_attach_controller", 00:21:02.064 "req_id": 1 00:21:02.064 } 00:21:02.064 Got JSON-RPC error response 00:21:02.064 response: 00:21:02.064 { 00:21:02.064 "code": -5, 00:21:02.064 "message": "Input/output error" 00:21:02.064 } 00:21:02.064 [2024-12-06 14:00:01.294306] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:02.064 [2024-12-06 14:00:01.294331] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:21:02.064 [2024-12-06 14:00:01.294343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
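The failed attach above is deliberate: key1 is not the PSK the target-side subsystem was configured with, so the TCP connection is torn down during setup (errno 107) and bdev_nvme_attach_controller returns an error. The NOT wrapper from autotest_common.sh inverts that result, so this step passes only when the command fails:

    # expected-failure attach; NOT reports success only because the RPC exits nonzero
    NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1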
00:21:02.064 14:00:01 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:21:02.064 14:00:01 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:02.064 14:00:01 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:02.064 14:00:01 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:02.064 14:00:01 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:21:02.064 14:00:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:02.064 14:00:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:02.064 14:00:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:02.064 14:00:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:02.064 14:00:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:02.322 14:00:01 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:21:02.322 14:00:01 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:21:02.322 14:00:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:02.322 14:00:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:02.322 14:00:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:02.322 14:00:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:02.322 14:00:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:02.580 14:00:01 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:21:02.580 14:00:01 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:21:02.580 14:00:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:02.839 14:00:02 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:21:02.839 14:00:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:21:03.096 14:00:02 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:21:03.096 14:00:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:03.096 14:00:02 keyring_file -- keyring/file.sh@78 -- # jq length 00:21:03.356 14:00:02 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:21:03.356 14:00:02 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.t8f7CknHFy 00:21:03.356 14:00:02 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.t8f7CknHFy 00:21:03.356 14:00:02 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:21:03.356 14:00:02 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.t8f7CknHFy 00:21:03.356 14:00:02 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:21:03.356 14:00:02 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:03.356 14:00:02 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:21:03.356 14:00:02 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:03.356 14:00:02 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.t8f7CknHFy 00:21:03.356 14:00:02 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.t8f7CknHFy 00:21:03.614 [2024-12-06 14:00:02.800620] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.t8f7CknHFy': 0100660 00:21:03.614 [2024-12-06 14:00:02.800654] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:03.614 request: 00:21:03.614 { 00:21:03.614 "name": "key0", 00:21:03.614 "path": "/tmp/tmp.t8f7CknHFy", 00:21:03.614 "method": "keyring_file_add_key", 00:21:03.614 "req_id": 1 00:21:03.614 } 00:21:03.614 Got JSON-RPC error response 00:21:03.614 response: 00:21:03.614 { 00:21:03.614 "code": -1, 00:21:03.614 "message": "Operation not permitted" 00:21:03.614 } 00:21:03.614 14:00:02 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:21:03.614 14:00:02 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:03.614 14:00:02 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:03.614 14:00:02 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:03.614 14:00:02 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.t8f7CknHFy 00:21:03.614 14:00:02 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.t8f7CknHFy 00:21:03.614 14:00:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.t8f7CknHFy 00:21:03.873 14:00:03 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.t8f7CknHFy 00:21:03.873 14:00:03 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:21:03.873 14:00:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:03.873 14:00:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:03.873 14:00:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:03.873 14:00:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:03.873 14:00:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:04.132 14:00:03 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:21:04.132 14:00:03 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:04.132 14:00:03 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:21:04.132 14:00:03 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:04.132 14:00:03 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:21:04.132 14:00:03 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:04.132 14:00:03 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:21:04.132 14:00:03 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:04.132 14:00:03 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:04.132 14:00:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 
127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:04.391 [2024-12-06 14:00:03.620791] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.t8f7CknHFy': No such file or directory 00:21:04.391 [2024-12-06 14:00:03.620828] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:21:04.391 [2024-12-06 14:00:03.620846] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:21:04.391 [2024-12-06 14:00:03.620854] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:21:04.391 [2024-12-06 14:00:03.620862] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:04.391 [2024-12-06 14:00:03.620869] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:21:04.391 request: 00:21:04.391 { 00:21:04.391 "name": "nvme0", 00:21:04.391 "trtype": "tcp", 00:21:04.391 "traddr": "127.0.0.1", 00:21:04.391 "adrfam": "ipv4", 00:21:04.391 "trsvcid": "4420", 00:21:04.391 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:04.391 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:04.391 "prchk_reftag": false, 00:21:04.391 "prchk_guard": false, 00:21:04.391 "hdgst": false, 00:21:04.391 "ddgst": false, 00:21:04.391 "psk": "key0", 00:21:04.391 "allow_unrecognized_csi": false, 00:21:04.391 "method": "bdev_nvme_attach_controller", 00:21:04.391 "req_id": 1 00:21:04.391 } 00:21:04.391 Got JSON-RPC error response 00:21:04.391 response: 00:21:04.391 { 00:21:04.391 "code": -19, 00:21:04.391 "message": "No such device" 00:21:04.391 } 00:21:04.391 14:00:03 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:21:04.391 14:00:03 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:04.391 14:00:03 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:04.391 14:00:03 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:04.391 14:00:03 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:21:04.391 14:00:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:04.651 14:00:03 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:04.651 14:00:03 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:04.651 14:00:03 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:04.651 14:00:03 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:04.651 14:00:03 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:04.651 14:00:03 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:04.651 14:00:03 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Ys0t5xlvZ4 00:21:04.651 14:00:03 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:04.651 14:00:03 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:04.651 14:00:03 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:21:04.651 14:00:03 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:04.651 14:00:03 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:04.651 14:00:03 keyring_file -- 
nvmf/common.sh@732 -- # digest=0 00:21:04.651 14:00:03 keyring_file -- nvmf/common.sh@733 -- # python - 00:21:04.651 14:00:03 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Ys0t5xlvZ4 00:21:04.651 14:00:03 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Ys0t5xlvZ4 00:21:04.651 14:00:03 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.Ys0t5xlvZ4 00:21:04.651 14:00:03 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Ys0t5xlvZ4 00:21:04.651 14:00:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Ys0t5xlvZ4 00:21:04.911 14:00:04 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:04.911 14:00:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:05.171 nvme0n1 00:21:05.171 14:00:04 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:21:05.171 14:00:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:05.171 14:00:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:05.171 14:00:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:05.171 14:00:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:05.171 14:00:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:05.430 14:00:04 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:21:05.430 14:00:04 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:21:05.430 14:00:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:05.430 14:00:04 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:21:05.430 14:00:04 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:21:05.430 14:00:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:05.430 14:00:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:05.430 14:00:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:05.690 14:00:05 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:21:05.947 14:00:05 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:21:05.947 14:00:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:05.947 14:00:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:05.947 14:00:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:05.947 14:00:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:05.947 14:00:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:05.947 14:00:05 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:21:05.947 14:00:05 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:05.947 14:00:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:21:06.205 14:00:05 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:21:06.205 14:00:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:06.205 14:00:05 keyring_file -- keyring/file.sh@105 -- # jq length 00:21:06.476 14:00:05 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:21:06.476 14:00:05 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Ys0t5xlvZ4 00:21:06.476 14:00:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Ys0t5xlvZ4 00:21:06.736 14:00:06 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.jHufBM960S 00:21:06.736 14:00:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.jHufBM960S 00:21:06.995 14:00:06 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:06.995 14:00:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:07.255 nvme0n1 00:21:07.255 14:00:06 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:21:07.255 14:00:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:21:07.515 14:00:06 keyring_file -- keyring/file.sh@113 -- # config='{ 00:21:07.515 "subsystems": [ 00:21:07.515 { 00:21:07.515 "subsystem": "keyring", 00:21:07.515 "config": [ 00:21:07.515 { 00:21:07.515 "method": "keyring_file_add_key", 00:21:07.515 "params": { 00:21:07.515 "name": "key0", 00:21:07.515 "path": "/tmp/tmp.Ys0t5xlvZ4" 00:21:07.515 } 00:21:07.515 }, 00:21:07.515 { 00:21:07.515 "method": "keyring_file_add_key", 00:21:07.515 "params": { 00:21:07.515 "name": "key1", 00:21:07.515 "path": "/tmp/tmp.jHufBM960S" 00:21:07.515 } 00:21:07.515 } 00:21:07.515 ] 00:21:07.515 }, 00:21:07.515 { 00:21:07.515 "subsystem": "iobuf", 00:21:07.515 "config": [ 00:21:07.515 { 00:21:07.515 "method": "iobuf_set_options", 00:21:07.515 "params": { 00:21:07.515 "small_pool_count": 8192, 00:21:07.515 "large_pool_count": 1024, 00:21:07.515 "small_bufsize": 8192, 00:21:07.515 "large_bufsize": 135168, 00:21:07.515 "enable_numa": false 00:21:07.515 } 00:21:07.515 } 00:21:07.515 ] 00:21:07.515 }, 00:21:07.515 { 00:21:07.515 "subsystem": "sock", 00:21:07.515 "config": [ 00:21:07.515 { 00:21:07.515 "method": "sock_set_default_impl", 00:21:07.515 "params": { 00:21:07.515 "impl_name": "uring" 00:21:07.515 } 00:21:07.515 }, 00:21:07.515 { 00:21:07.515 "method": "sock_impl_set_options", 00:21:07.515 "params": { 00:21:07.515 "impl_name": "ssl", 00:21:07.515 "recv_buf_size": 4096, 00:21:07.515 "send_buf_size": 4096, 00:21:07.515 "enable_recv_pipe": true, 00:21:07.515 "enable_quickack": false, 00:21:07.515 "enable_placement_id": 0, 00:21:07.515 "enable_zerocopy_send_server": true, 00:21:07.515 "enable_zerocopy_send_client": false, 00:21:07.515 "zerocopy_threshold": 0, 00:21:07.515 "tls_version": 0, 00:21:07.515 "enable_ktls": false 00:21:07.515 } 00:21:07.515 }, 00:21:07.515 { 00:21:07.515 "method": 
"sock_impl_set_options", 00:21:07.515 "params": { 00:21:07.515 "impl_name": "posix", 00:21:07.515 "recv_buf_size": 2097152, 00:21:07.515 "send_buf_size": 2097152, 00:21:07.515 "enable_recv_pipe": true, 00:21:07.515 "enable_quickack": false, 00:21:07.515 "enable_placement_id": 0, 00:21:07.515 "enable_zerocopy_send_server": true, 00:21:07.515 "enable_zerocopy_send_client": false, 00:21:07.515 "zerocopy_threshold": 0, 00:21:07.515 "tls_version": 0, 00:21:07.515 "enable_ktls": false 00:21:07.515 } 00:21:07.515 }, 00:21:07.515 { 00:21:07.515 "method": "sock_impl_set_options", 00:21:07.515 "params": { 00:21:07.515 "impl_name": "uring", 00:21:07.515 "recv_buf_size": 2097152, 00:21:07.515 "send_buf_size": 2097152, 00:21:07.515 "enable_recv_pipe": true, 00:21:07.515 "enable_quickack": false, 00:21:07.515 "enable_placement_id": 0, 00:21:07.515 "enable_zerocopy_send_server": false, 00:21:07.515 "enable_zerocopy_send_client": false, 00:21:07.515 "zerocopy_threshold": 0, 00:21:07.515 "tls_version": 0, 00:21:07.515 "enable_ktls": false 00:21:07.515 } 00:21:07.515 } 00:21:07.515 ] 00:21:07.515 }, 00:21:07.515 { 00:21:07.515 "subsystem": "vmd", 00:21:07.515 "config": [] 00:21:07.515 }, 00:21:07.515 { 00:21:07.515 "subsystem": "accel", 00:21:07.515 "config": [ 00:21:07.515 { 00:21:07.515 "method": "accel_set_options", 00:21:07.515 "params": { 00:21:07.515 "small_cache_size": 128, 00:21:07.515 "large_cache_size": 16, 00:21:07.515 "task_count": 2048, 00:21:07.515 "sequence_count": 2048, 00:21:07.515 "buf_count": 2048 00:21:07.515 } 00:21:07.515 } 00:21:07.515 ] 00:21:07.515 }, 00:21:07.515 { 00:21:07.515 "subsystem": "bdev", 00:21:07.515 "config": [ 00:21:07.515 { 00:21:07.515 "method": "bdev_set_options", 00:21:07.515 "params": { 00:21:07.516 "bdev_io_pool_size": 65535, 00:21:07.516 "bdev_io_cache_size": 256, 00:21:07.516 "bdev_auto_examine": true, 00:21:07.516 "iobuf_small_cache_size": 128, 00:21:07.516 "iobuf_large_cache_size": 16 00:21:07.516 } 00:21:07.516 }, 00:21:07.516 { 00:21:07.516 "method": "bdev_raid_set_options", 00:21:07.516 "params": { 00:21:07.516 "process_window_size_kb": 1024, 00:21:07.516 "process_max_bandwidth_mb_sec": 0 00:21:07.516 } 00:21:07.516 }, 00:21:07.516 { 00:21:07.516 "method": "bdev_iscsi_set_options", 00:21:07.516 "params": { 00:21:07.516 "timeout_sec": 30 00:21:07.516 } 00:21:07.516 }, 00:21:07.516 { 00:21:07.516 "method": "bdev_nvme_set_options", 00:21:07.516 "params": { 00:21:07.516 "action_on_timeout": "none", 00:21:07.516 "timeout_us": 0, 00:21:07.516 "timeout_admin_us": 0, 00:21:07.516 "keep_alive_timeout_ms": 10000, 00:21:07.516 "arbitration_burst": 0, 00:21:07.516 "low_priority_weight": 0, 00:21:07.516 "medium_priority_weight": 0, 00:21:07.516 "high_priority_weight": 0, 00:21:07.516 "nvme_adminq_poll_period_us": 10000, 00:21:07.516 "nvme_ioq_poll_period_us": 0, 00:21:07.516 "io_queue_requests": 512, 00:21:07.516 "delay_cmd_submit": true, 00:21:07.516 "transport_retry_count": 4, 00:21:07.516 "bdev_retry_count": 3, 00:21:07.516 "transport_ack_timeout": 0, 00:21:07.516 "ctrlr_loss_timeout_sec": 0, 00:21:07.516 "reconnect_delay_sec": 0, 00:21:07.516 "fast_io_fail_timeout_sec": 0, 00:21:07.516 "disable_auto_failback": false, 00:21:07.516 "generate_uuids": false, 00:21:07.516 "transport_tos": 0, 00:21:07.516 "nvme_error_stat": false, 00:21:07.516 "rdma_srq_size": 0, 00:21:07.516 "io_path_stat": false, 00:21:07.516 "allow_accel_sequence": false, 00:21:07.516 "rdma_max_cq_size": 0, 00:21:07.516 "rdma_cm_event_timeout_ms": 0, 00:21:07.516 "dhchap_digests": [ 00:21:07.516 
"sha256", 00:21:07.516 "sha384", 00:21:07.516 "sha512" 00:21:07.516 ], 00:21:07.516 "dhchap_dhgroups": [ 00:21:07.516 "null", 00:21:07.516 "ffdhe2048", 00:21:07.516 "ffdhe3072", 00:21:07.516 "ffdhe4096", 00:21:07.516 "ffdhe6144", 00:21:07.516 "ffdhe8192" 00:21:07.516 ] 00:21:07.516 } 00:21:07.516 }, 00:21:07.516 { 00:21:07.516 "method": "bdev_nvme_attach_controller", 00:21:07.516 "params": { 00:21:07.516 "name": "nvme0", 00:21:07.516 "trtype": "TCP", 00:21:07.516 "adrfam": "IPv4", 00:21:07.516 "traddr": "127.0.0.1", 00:21:07.516 "trsvcid": "4420", 00:21:07.516 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:07.516 "prchk_reftag": false, 00:21:07.516 "prchk_guard": false, 00:21:07.516 "ctrlr_loss_timeout_sec": 0, 00:21:07.516 "reconnect_delay_sec": 0, 00:21:07.516 "fast_io_fail_timeout_sec": 0, 00:21:07.516 "psk": "key0", 00:21:07.516 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:07.516 "hdgst": false, 00:21:07.516 "ddgst": false, 00:21:07.516 "multipath": "multipath" 00:21:07.516 } 00:21:07.516 }, 00:21:07.516 { 00:21:07.516 "method": "bdev_nvme_set_hotplug", 00:21:07.516 "params": { 00:21:07.516 "period_us": 100000, 00:21:07.516 "enable": false 00:21:07.516 } 00:21:07.516 }, 00:21:07.516 { 00:21:07.516 "method": "bdev_wait_for_examine" 00:21:07.516 } 00:21:07.516 ] 00:21:07.516 }, 00:21:07.516 { 00:21:07.516 "subsystem": "nbd", 00:21:07.516 "config": [] 00:21:07.516 } 00:21:07.516 ] 00:21:07.516 }' 00:21:07.516 14:00:06 keyring_file -- keyring/file.sh@115 -- # killprocess 85188 00:21:07.516 14:00:06 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85188 ']' 00:21:07.516 14:00:06 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85188 00:21:07.516 14:00:06 keyring_file -- common/autotest_common.sh@959 -- # uname 00:21:07.516 14:00:06 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:07.516 14:00:06 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85188 00:21:07.516 killing process with pid 85188 00:21:07.516 Received shutdown signal, test time was about 1.000000 seconds 00:21:07.516 00:21:07.516 Latency(us) 00:21:07.516 [2024-12-06T14:00:06.920Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.516 [2024-12-06T14:00:06.920Z] =================================================================================================================== 00:21:07.516 [2024-12-06T14:00:06.920Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:07.516 14:00:06 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:07.516 14:00:06 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:07.516 14:00:06 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85188' 00:21:07.516 14:00:06 keyring_file -- common/autotest_common.sh@973 -- # kill 85188 00:21:07.516 14:00:06 keyring_file -- common/autotest_common.sh@978 -- # wait 85188 00:21:07.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:21:07.776 14:00:07 keyring_file -- keyring/file.sh@118 -- # bperfpid=85428 00:21:07.776 14:00:07 keyring_file -- keyring/file.sh@120 -- # waitforlisten 85428 /var/tmp/bperf.sock 00:21:07.776 14:00:07 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85428 ']' 00:21:07.776 14:00:07 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:07.776 14:00:07 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:07.776 14:00:07 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:21:07.776 14:00:07 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:07.776 14:00:07 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:21:07.776 "subsystems": [ 00:21:07.776 { 00:21:07.776 "subsystem": "keyring", 00:21:07.776 "config": [ 00:21:07.776 { 00:21:07.776 "method": "keyring_file_add_key", 00:21:07.776 "params": { 00:21:07.776 "name": "key0", 00:21:07.776 "path": "/tmp/tmp.Ys0t5xlvZ4" 00:21:07.776 } 00:21:07.776 }, 00:21:07.776 { 00:21:07.776 "method": "keyring_file_add_key", 00:21:07.776 "params": { 00:21:07.776 "name": "key1", 00:21:07.776 "path": "/tmp/tmp.jHufBM960S" 00:21:07.776 } 00:21:07.776 } 00:21:07.776 ] 00:21:07.776 }, 00:21:07.776 { 00:21:07.776 "subsystem": "iobuf", 00:21:07.776 "config": [ 00:21:07.776 { 00:21:07.776 "method": "iobuf_set_options", 00:21:07.776 "params": { 00:21:07.776 "small_pool_count": 8192, 00:21:07.776 "large_pool_count": 1024, 00:21:07.777 "small_bufsize": 8192, 00:21:07.777 "large_bufsize": 135168, 00:21:07.777 "enable_numa": false 00:21:07.777 } 00:21:07.777 } 00:21:07.777 ] 00:21:07.777 }, 00:21:07.777 { 00:21:07.777 "subsystem": "sock", 00:21:07.777 "config": [ 00:21:07.777 { 00:21:07.777 "method": "sock_set_default_impl", 00:21:07.777 "params": { 00:21:07.777 "impl_name": "uring" 00:21:07.777 } 00:21:07.777 }, 00:21:07.777 { 00:21:07.777 "method": "sock_impl_set_options", 00:21:07.777 "params": { 00:21:07.777 "impl_name": "ssl", 00:21:07.777 "recv_buf_size": 4096, 00:21:07.777 "send_buf_size": 4096, 00:21:07.777 "enable_recv_pipe": true, 00:21:07.777 "enable_quickack": false, 00:21:07.777 "enable_placement_id": 0, 00:21:07.777 "enable_zerocopy_send_server": true, 00:21:07.777 "enable_zerocopy_send_client": false, 00:21:07.777 "zerocopy_threshold": 0, 00:21:07.777 "tls_version": 0, 00:21:07.777 "enable_ktls": false 00:21:07.777 } 00:21:07.777 }, 00:21:07.777 { 00:21:07.777 "method": "sock_impl_set_options", 00:21:07.777 "params": { 00:21:07.777 "impl_name": "posix", 00:21:07.777 "recv_buf_size": 2097152, 00:21:07.777 "send_buf_size": 2097152, 00:21:07.777 "enable_recv_pipe": true, 00:21:07.777 "enable_quickack": false, 00:21:07.777 "enable_placement_id": 0, 00:21:07.777 "enable_zerocopy_send_server": true, 00:21:07.777 "enable_zerocopy_send_client": false, 00:21:07.777 "zerocopy_threshold": 0, 00:21:07.777 "tls_version": 0, 00:21:07.777 "enable_ktls": false 00:21:07.777 } 00:21:07.777 }, 00:21:07.777 { 00:21:07.777 "method": "sock_impl_set_options", 00:21:07.777 "params": { 00:21:07.777 "impl_name": "uring", 00:21:07.777 "recv_buf_size": 2097152, 00:21:07.777 "send_buf_size": 2097152, 00:21:07.777 "enable_recv_pipe": true, 00:21:07.777 "enable_quickack": false, 00:21:07.777 "enable_placement_id": 0, 00:21:07.777 "enable_zerocopy_send_server": false, 00:21:07.777 
"enable_zerocopy_send_client": false, 00:21:07.777 "zerocopy_threshold": 0, 00:21:07.777 "tls_version": 0, 00:21:07.777 "enable_ktls": false 00:21:07.777 } 00:21:07.777 } 00:21:07.777 ] 00:21:07.777 }, 00:21:07.777 { 00:21:07.777 "subsystem": "vmd", 00:21:07.777 "config": [] 00:21:07.777 }, 00:21:07.777 { 00:21:07.777 "subsystem": "accel", 00:21:07.777 "config": [ 00:21:07.777 { 00:21:07.777 "method": "accel_set_options", 00:21:07.777 "params": { 00:21:07.777 "small_cache_size": 128, 00:21:07.777 "large_cache_size": 16, 00:21:07.777 "task_count": 2048, 00:21:07.777 "sequence_count": 2048, 00:21:07.777 "buf_count": 2048 00:21:07.777 } 00:21:07.777 } 00:21:07.777 ] 00:21:07.777 }, 00:21:07.777 { 00:21:07.777 "subsystem": "bdev", 00:21:07.777 "config": [ 00:21:07.777 { 00:21:07.777 "method": "bdev_set_options", 00:21:07.777 "params": { 00:21:07.777 "bdev_io_pool_size": 65535, 00:21:07.777 "bdev_io_cache_size": 256, 00:21:07.777 "bdev_auto_examine": true, 00:21:07.777 "iobuf_small_cache_size": 128, 00:21:07.777 "iobuf_large_cache_size": 16 00:21:07.777 } 00:21:07.777 }, 00:21:07.777 { 00:21:07.777 "method": "bdev_raid_set_options", 00:21:07.777 "params": { 00:21:07.777 "process_window_size_kb": 1024, 00:21:07.777 "process_max_bandwidth_mb_sec": 0 00:21:07.777 } 00:21:07.777 }, 00:21:07.777 { 00:21:07.777 "method": "bdev_iscsi_set_options", 00:21:07.777 "params": { 00:21:07.777 "timeout_sec": 30 00:21:07.777 } 00:21:07.777 }, 00:21:07.777 { 00:21:07.777 "method": "bdev_nvme_set_options", 00:21:07.777 "params": { 00:21:07.777 "action_on_timeout": "none", 00:21:07.777 "timeout_us": 0, 00:21:07.777 "timeout_admin_us": 0, 00:21:07.777 "keep_alive_timeout_ms": 10000, 00:21:07.777 "arbitration_burst": 0, 00:21:07.777 "low_priority_weight": 0, 00:21:07.777 "medium_priority_weight": 0, 00:21:07.777 "high_priority_weight": 0, 00:21:07.777 "nvme_adminq_poll_period_us": 10000, 00:21:07.777 "nvme_io 14:00:07 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:07.777 q_poll_period_us": 0, 00:21:07.777 "io_queue_requests": 512, 00:21:07.777 "delay_cmd_submit": true, 00:21:07.777 "transport_retry_count": 4, 00:21:07.777 "bdev_retry_count": 3, 00:21:07.777 "transport_ack_timeout": 0, 00:21:07.777 "ctrlr_loss_timeout_sec": 0, 00:21:07.777 "reconnect_delay_sec": 0, 00:21:07.777 "fast_io_fail_timeout_sec": 0, 00:21:07.777 "disable_auto_failback": false, 00:21:07.777 "generate_uuids": false, 00:21:07.777 "transport_tos": 0, 00:21:07.777 "nvme_error_stat": false, 00:21:07.777 "rdma_srq_size": 0, 00:21:07.777 "io_path_stat": false, 00:21:07.777 "allow_accel_sequence": false, 00:21:07.777 "rdma_max_cq_size": 0, 00:21:07.777 "rdma_cm_event_timeout_ms": 0, 00:21:07.777 "dhchap_digests": [ 00:21:07.777 "sha256", 00:21:07.777 "sha384", 00:21:07.777 "sha512" 00:21:07.777 ], 00:21:07.777 "dhchap_dhgroups": [ 00:21:07.777 "null", 00:21:07.777 "ffdhe2048", 00:21:07.777 "ffdhe3072", 00:21:07.777 "ffdhe4096", 00:21:07.777 "ffdhe6144", 00:21:07.777 "ffdhe8192" 00:21:07.777 ] 00:21:07.778 } 00:21:07.778 }, 00:21:07.778 { 00:21:07.778 "method": "bdev_nvme_attach_controller", 00:21:07.778 "params": { 00:21:07.778 "name": "nvme0", 00:21:07.778 "trtype": "TCP", 00:21:07.778 "adrfam": "IPv4", 00:21:07.778 "traddr": "127.0.0.1", 00:21:07.778 "trsvcid": "4420", 00:21:07.778 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:07.778 "prchk_reftag": false, 00:21:07.778 "prchk_guard": false, 00:21:07.778 "ctrlr_loss_timeout_sec": 0, 00:21:07.778 "reconnect_delay_sec": 0, 00:21:07.778 "fast_io_fail_timeout_sec": 0, 
00:21:07.778 "psk": "key0", 00:21:07.778 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:07.778 "hdgst": false, 00:21:07.778 "ddgst": false, 00:21:07.778 "multipath": "multipath" 00:21:07.778 } 00:21:07.778 }, 00:21:07.778 { 00:21:07.778 "method": "bdev_nvme_set_hotplug", 00:21:07.778 "params": { 00:21:07.778 "period_us": 100000, 00:21:07.778 "enable": false 00:21:07.778 } 00:21:07.778 }, 00:21:07.778 { 00:21:07.778 "method": "bdev_wait_for_examine" 00:21:07.778 } 00:21:07.778 ] 00:21:07.778 }, 00:21:07.778 { 00:21:07.778 "subsystem": "nbd", 00:21:07.778 "config": [] 00:21:07.778 } 00:21:07.778 ] 00:21:07.778 }' 00:21:07.778 14:00:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:07.778 [2024-12-06 14:00:07.121747] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 00:21:07.778 [2024-12-06 14:00:07.122890] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85428 ] 00:21:08.037 [2024-12-06 14:00:07.259230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.037 [2024-12-06 14:00:07.305277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.037 [2024-12-06 14:00:07.435903] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:08.297 [2024-12-06 14:00:07.490341] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:08.867 14:00:08 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:08.867 14:00:08 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:21:08.867 14:00:08 keyring_file -- keyring/file.sh@121 -- # jq length 00:21:08.867 14:00:08 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:21:08.867 14:00:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:08.867 14:00:08 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:21:08.867 14:00:08 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:21:08.867 14:00:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:08.867 14:00:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:08.867 14:00:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:08.867 14:00:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:08.867 14:00:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:09.126 14:00:08 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:21:09.126 14:00:08 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:21:09.126 14:00:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:09.126 14:00:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:09.126 14:00:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:09.126 14:00:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:09.126 14:00:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:09.385 14:00:08 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:21:09.385 14:00:08 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:21:09.385 
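Two of the keyring_file checks earlier in this stretch are about the backing file rather than the key material: keyring_file_add_key rejects a key file whose mode is wider than owner read/write (the "Invalid permissions ... 0100660" error), and a key whose file is removed after registration still shows up in keyring_get_keys but can no longer be used to attach (error -19, "Could not stat key file"). A condensed replay of those steps, reusing the socket and the temp-file path from the trace:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
keyfile=/tmp/tmp.t8f7CknHFy   # key file created by an earlier step of the run

# A group-readable key file is refused outright.
chmod 0660 "$keyfile"
$rpc keyring_file_add_key key0 "$keyfile" && echo "0660 add should have failed" >&2

# 0600 is accepted.
chmod 0600 "$keyfile"
$rpc keyring_file_add_key key0 "$keyfile"

# The key entry survives deletion of its file, but using it fails at open time.
rm -f "$keyfile"
$rpc keyring_get_keys | jq '.[] | select(.name == "key0") | .refcnt'   # still listed
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 \
    || echo "attach failed as expected: the key file is gone"
$rpc keyring_file_remove_key key0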
14:00:08 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:21:09.385 14:00:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:21:09.644 14:00:08 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:21:09.644 14:00:08 keyring_file -- keyring/file.sh@1 -- # cleanup 00:21:09.644 14:00:08 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.Ys0t5xlvZ4 /tmp/tmp.jHufBM960S 00:21:09.644 14:00:08 keyring_file -- keyring/file.sh@20 -- # killprocess 85428 00:21:09.644 14:00:08 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85428 ']' 00:21:09.644 14:00:08 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85428 00:21:09.644 14:00:08 keyring_file -- common/autotest_common.sh@959 -- # uname 00:21:09.644 14:00:08 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:09.644 14:00:08 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85428 00:21:09.644 killing process with pid 85428 00:21:09.644 Received shutdown signal, test time was about 1.000000 seconds 00:21:09.644 00:21:09.644 Latency(us) 00:21:09.644 [2024-12-06T14:00:09.048Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:09.644 [2024-12-06T14:00:09.048Z] =================================================================================================================== 00:21:09.644 [2024-12-06T14:00:09.048Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:09.644 14:00:08 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:09.644 14:00:08 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:09.644 14:00:08 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85428' 00:21:09.644 14:00:08 keyring_file -- common/autotest_common.sh@973 -- # kill 85428 00:21:09.644 14:00:08 keyring_file -- common/autotest_common.sh@978 -- # wait 85428 00:21:09.904 14:00:09 keyring_file -- keyring/file.sh@21 -- # killprocess 85178 00:21:09.904 14:00:09 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85178 ']' 00:21:09.904 14:00:09 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85178 00:21:09.904 14:00:09 keyring_file -- common/autotest_common.sh@959 -- # uname 00:21:09.904 14:00:09 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:09.904 14:00:09 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85178 00:21:09.904 14:00:09 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:09.904 14:00:09 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:09.904 14:00:09 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85178' 00:21:09.904 killing process with pid 85178 00:21:09.904 14:00:09 keyring_file -- common/autotest_common.sh@973 -- # kill 85178 00:21:09.904 14:00:09 keyring_file -- common/autotest_common.sh@978 -- # wait 85178 00:21:10.472 00:21:10.472 real 0m14.122s 00:21:10.472 user 0m34.979s 00:21:10.472 sys 0m2.923s 00:21:10.472 14:00:09 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:10.472 14:00:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:10.472 ************************************ 00:21:10.472 END TEST keyring_file 00:21:10.472 ************************************ 00:21:10.472 14:00:09 -- spdk/autotest.sh@293 -- # [[ y == y ]] 
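The two large JSON blobs above are the same configuration twice: the first is what save_config returned on the original bdevperf instance, the second is that JSON being echoed into a fresh bdevperf started with -c /dev/fd/63, which is what a bash process substitution of the captured config looks like on the command line (an inference from the fd path, not something the trace states). The round-trip, with the binary, flags and socket taken from the trace:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
bperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

# Capture the live configuration (keyring, sock, bdev_nvme, ...) of the running app.
config=$($rpc save_config)

# Stop the old instance (the trace kills pid 85188 here), then start a new one
# directly from the captured JSON instead of re-issuing every RPC by hand.
$bperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z \
    -c <(echo "$config") &

# Once the new instance is listening, both file-based keys are back automatically.
$rpc keyring_get_keys | jq length    # expected: 2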
00:21:10.472 14:00:09 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:21:10.472 14:00:09 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:10.472 14:00:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:10.472 14:00:09 -- common/autotest_common.sh@10 -- # set +x 00:21:10.472 ************************************ 00:21:10.472 START TEST keyring_linux 00:21:10.472 ************************************ 00:21:10.472 14:00:09 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:21:10.472 Joined session keyring: 212201203 00:21:10.472 * Looking for test storage... 00:21:10.472 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:10.472 14:00:09 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:10.472 14:00:09 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:21:10.473 14:00:09 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:10.733 14:00:09 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:10.733 14:00:09 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:10.733 14:00:09 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:10.733 14:00:09 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:10.733 14:00:09 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:21:10.733 14:00:09 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:21:10.733 14:00:09 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:21:10.733 14:00:09 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:21:10.733 14:00:09 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:21:10.733 14:00:09 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:21:10.733 14:00:09 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:21:10.733 14:00:09 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:10.733 14:00:09 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:21:10.733 14:00:09 keyring_linux -- scripts/common.sh@345 -- # : 1 00:21:10.733 14:00:09 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:10.733 14:00:09 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:10.733 14:00:09 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:21:10.733 14:00:09 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:21:10.733 14:00:09 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:10.733 14:00:09 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:21:10.733 14:00:09 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:21:10.733 14:00:09 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:21:10.733 14:00:09 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:21:10.733 14:00:09 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:10.733 14:00:09 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:21:10.733 14:00:09 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:21:10.733 14:00:09 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:10.733 14:00:09 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:10.733 14:00:09 keyring_linux -- scripts/common.sh@368 -- # return 0 00:21:10.733 14:00:09 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:10.733 14:00:09 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:10.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.733 --rc genhtml_branch_coverage=1 00:21:10.733 --rc genhtml_function_coverage=1 00:21:10.733 --rc genhtml_legend=1 00:21:10.733 --rc geninfo_all_blocks=1 00:21:10.733 --rc geninfo_unexecuted_blocks=1 00:21:10.733 00:21:10.733 ' 00:21:10.733 14:00:09 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:10.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.733 --rc genhtml_branch_coverage=1 00:21:10.733 --rc genhtml_function_coverage=1 00:21:10.733 --rc genhtml_legend=1 00:21:10.733 --rc geninfo_all_blocks=1 00:21:10.733 --rc geninfo_unexecuted_blocks=1 00:21:10.733 00:21:10.733 ' 00:21:10.733 14:00:09 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:10.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.733 --rc genhtml_branch_coverage=1 00:21:10.733 --rc genhtml_function_coverage=1 00:21:10.733 --rc genhtml_legend=1 00:21:10.733 --rc geninfo_all_blocks=1 00:21:10.733 --rc geninfo_unexecuted_blocks=1 00:21:10.733 00:21:10.733 ' 00:21:10.733 14:00:09 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:10.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.733 --rc genhtml_branch_coverage=1 00:21:10.733 --rc genhtml_function_coverage=1 00:21:10.733 --rc genhtml_legend=1 00:21:10.733 --rc geninfo_all_blocks=1 00:21:10.733 --rc geninfo_unexecuted_blocks=1 00:21:10.733 00:21:10.733 ' 00:21:10.733 14:00:09 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:10.733 14:00:09 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:10.733 14:00:09 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:21:10.733 14:00:09 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:10.733 14:00:09 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:10.733 14:00:09 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:10.733 14:00:09 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:10.733 14:00:09 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:10.733 14:00:09 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:10.733 14:00:09 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:10.733 14:00:09 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:10.733 14:00:09 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:10.733 14:00:09 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:10.733 14:00:09 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cfa2def7-c8af-457f-82a0-b312efdea7f4 00:21:10.733 14:00:09 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=cfa2def7-c8af-457f-82a0-b312efdea7f4 00:21:10.733 14:00:09 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:10.733 14:00:09 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:10.733 14:00:09 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:10.733 14:00:09 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:10.733 14:00:09 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:10.733 14:00:09 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:21:10.733 14:00:09 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:10.733 14:00:09 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:10.733 14:00:09 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:10.733 14:00:09 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.733 14:00:09 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.733 14:00:09 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.733 14:00:09 keyring_linux -- paths/export.sh@5 -- # export PATH 00:21:10.733 14:00:09 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.733 14:00:09 keyring_linux -- nvmf/common.sh@51 -- # : 0 
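The keyring_linux test that starts above is entered through scripts/keyctl-session-wrapper; the "Joined session keyring: 212201203" line is the tell-tale output of keyctl joining a new session, so the keys the test adds to @s stay private to this run and vanish with it. The wrapper's contents are not shown in the log; a plausible minimal equivalent, assuming it does nothing more than re-exec its arguments inside an anonymous session keyring:

#!/usr/bin/env bash
# Hypothetical stand-in for scripts/keyctl-session-wrapper: run the given command
# inside a fresh anonymous session keyring so keys added to @s die with it.
exec keyctl session - "$@"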
00:21:10.733 14:00:09 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:10.733 14:00:09 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:10.733 14:00:09 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:10.733 14:00:09 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:10.733 14:00:09 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:10.733 14:00:09 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:10.733 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:10.733 14:00:09 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:10.733 14:00:09 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:10.733 14:00:09 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:10.733 14:00:09 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:10.733 14:00:09 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:10.733 14:00:09 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:10.733 14:00:09 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:21:10.733 14:00:09 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:21:10.733 14:00:09 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:21:10.733 14:00:09 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:21:10.733 14:00:09 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:21:10.733 14:00:09 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:21:10.733 14:00:09 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:10.733 14:00:09 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:21:10.733 14:00:09 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:21:10.733 14:00:09 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:10.733 14:00:09 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:10.733 14:00:09 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:21:10.733 14:00:09 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:10.733 14:00:09 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:10.733 14:00:09 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:21:10.733 14:00:09 keyring_linux -- nvmf/common.sh@733 -- # python - 00:21:10.733 14:00:10 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:21:10.733 /tmp/:spdk-test:key0 00:21:10.733 14:00:10 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:21:10.733 14:00:10 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:21:10.733 14:00:10 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:21:10.733 14:00:10 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:21:10.733 14:00:10 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:10.733 14:00:10 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:21:10.733 14:00:10 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:21:10.733 14:00:10 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:21:10.733 14:00:10 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:10.733 14:00:10 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:21:10.733 14:00:10 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:10.733 14:00:10 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:21:10.733 14:00:10 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:21:10.733 14:00:10 keyring_linux -- nvmf/common.sh@733 -- # python - 00:21:10.733 14:00:10 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:21:10.733 /tmp/:spdk-test:key1 00:21:10.733 14:00:10 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:21:10.733 14:00:10 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85551 00:21:10.734 14:00:10 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:10.734 14:00:10 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85551 00:21:10.734 14:00:10 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 85551 ']' 00:21:10.734 14:00:10 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.734 14:00:10 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:10.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.734 14:00:10 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.734 14:00:10 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:10.734 14:00:10 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:10.734 [2024-12-06 14:00:10.131748] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
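prep_key, via format_interchange_psk, has now produced /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 the same way it earlier produced the file-based keys: the raw hex secret is wrapped into the NVMe TLS interchange form (NVMeTLSkey-1:00:...:). Judging from the inline python call in the trace, the payload is the ASCII key string with a four-byte checksum appended and then base64-encoded; the sketch below reproduces that shape, with zlib.crc32 and little-endian byte order being assumptions rather than something the log confirms:

# Hedged re-implementation of format_interchange_psk for digest 0 (no PSK hash).
format_interchange_psk() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'PYEOF'
import base64, sys, zlib

key = sys.argv[1].encode()                   # the ASCII hex string itself, not its binary value
digest = int(sys.argv[2])                    # 0 in these tests, i.e. no PSK digest
crc = zlib.crc32(key).to_bytes(4, "little")  # checksum algorithm and byte order are assumed
print(f"NVMeTLSkey-1:{digest:02}:{base64.b64encode(key + crc).decode()}:")
PYEOF
}

format_interchange_psk 112233445566778899aabbccddeeff00 0 > /tmp/:spdk-test:key1
chmod 0600 /tmp/:spdk-test:key1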
00:21:10.734 [2024-12-06 14:00:10.131853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85551 ] 00:21:10.992 [2024-12-06 14:00:10.272521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.992 [2024-12-06 14:00:10.322628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.251 [2024-12-06 14:00:10.408453] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:11.251 14:00:10 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:11.251 14:00:10 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:21:11.251 14:00:10 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:21:11.251 14:00:10 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.251 14:00:10 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:11.251 [2024-12-06 14:00:10.647533] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:11.510 null0 00:21:11.510 [2024-12-06 14:00:10.679502] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:11.510 [2024-12-06 14:00:10.679707] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:11.510 14:00:10 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.510 14:00:10 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:21:11.510 807117386 00:21:11.510 14:00:10 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:21:11.510 237332266 00:21:11.510 14:00:10 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85567 00:21:11.510 14:00:10 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:21:11.510 14:00:10 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85567 /var/tmp/bperf.sock 00:21:11.510 14:00:10 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 85567 ']' 00:21:11.510 14:00:10 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:11.510 14:00:10 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:11.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:11.510 14:00:10 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:11.510 14:00:10 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:11.510 14:00:10 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:11.510 [2024-12-06 14:00:10.758204] Starting SPDK v25.01-pre git sha1 37ef4f42e / DPDK 24.03.0 initialization... 
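For the Linux-keyring variant the formatted secrets are not handed to SPDK as files; they are loaded into the kernel session keyring with keyctl, and the serials keyctl prints back (807117386 and 237332266 here) are what the later checks compare against. Condensed from the trace, reading the material back from the files prep_key just wrote (the trace passes the key strings literally, which is equivalent):

# Load both test PSKs into the session keyring; keyctl echoes each new key's serial.
sn0=$(keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s)
sn1=$(keyctl add user :spdk-test:key1 "$(cat /tmp/:spdk-test:key1)" @s)
echo "key0 serial: $sn0  key1 serial: $sn1"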
00:21:11.510 [2024-12-06 14:00:10.758307] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85567 ] 00:21:11.510 [2024-12-06 14:00:10.910111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.768 [2024-12-06 14:00:10.960726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:12.337 14:00:11 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:12.337 14:00:11 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:21:12.337 14:00:11 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:21:12.337 14:00:11 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:21:12.596 14:00:11 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:21:12.596 14:00:11 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:12.855 [2024-12-06 14:00:12.202290] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:12.855 14:00:12 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:21:12.855 14:00:12 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:21:13.114 [2024-12-06 14:00:12.447316] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:13.373 nvme0n1 00:21:13.373 14:00:12 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:21:13.373 14:00:12 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:21:13.373 14:00:12 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:21:13.373 14:00:12 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:21:13.373 14:00:12 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:13.373 14:00:12 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:21:13.633 14:00:12 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:21:13.633 14:00:12 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:21:13.633 14:00:12 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:21:13.633 14:00:12 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:21:13.633 14:00:12 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:21:13.633 14:00:12 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:13.633 14:00:12 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:13.892 14:00:13 keyring_linux -- keyring/linux.sh@25 -- # sn=807117386 00:21:13.892 14:00:13 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:21:13.892 14:00:13 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
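This bdevperf was started with --wait-for-rpc, so unlike the keyring_file runs it comes up idle: the test first switches the Linux-keyring backend on, then lets the framework finish initialising, and only then attaches using a key name that lives in the kernel keyring rather than in a file. The same three RPCs, against the bperf socket from the trace:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

# 1. Enable the keyring_linux module while the app is still waiting for RPCs.
$rpc keyring_linux_set_options --enable

# 2. Let the framework finish starting (sock and bdev subsystems come up here).
$rpc framework_start_init

# 3. Attach using a key held in the kernel session keyring, not a file path.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0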
00:21:13.892 14:00:13 keyring_linux -- keyring/linux.sh@26 -- # [[ 807117386 == \8\0\7\1\1\7\3\8\6 ]] 00:21:13.892 14:00:13 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 807117386 00:21:13.892 14:00:13 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:21:13.892 14:00:13 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:13.892 Running I/O for 1 seconds... 00:21:14.830 13259.00 IOPS, 51.79 MiB/s 00:21:14.830 Latency(us) 00:21:14.830 [2024-12-06T14:00:14.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:14.830 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:14.830 nvme0n1 : 1.01 13263.21 51.81 0.00 0.00 9601.13 4736.47 13047.62 00:21:14.830 [2024-12-06T14:00:14.234Z] =================================================================================================================== 00:21:14.830 [2024-12-06T14:00:14.234Z] Total : 13263.21 51.81 0.00 0.00 9601.13 4736.47 13047.62 00:21:14.830 { 00:21:14.830 "results": [ 00:21:14.830 { 00:21:14.830 "job": "nvme0n1", 00:21:14.830 "core_mask": "0x2", 00:21:14.830 "workload": "randread", 00:21:14.830 "status": "finished", 00:21:14.830 "queue_depth": 128, 00:21:14.830 "io_size": 4096, 00:21:14.830 "runtime": 1.009409, 00:21:14.830 "iops": 13263.206490134326, 00:21:14.830 "mibps": 51.80940035208721, 00:21:14.830 "io_failed": 0, 00:21:14.830 "io_timeout": 0, 00:21:14.830 "avg_latency_us": 9601.128800282479, 00:21:14.830 "min_latency_us": 4736.465454545454, 00:21:14.830 "max_latency_us": 13047.621818181819 00:21:14.830 } 00:21:14.830 ], 00:21:14.830 "core_count": 1 00:21:14.830 } 00:21:14.830 14:00:14 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:14.830 14:00:14 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:15.089 14:00:14 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:21:15.089 14:00:14 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:21:15.089 14:00:14 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:21:15.089 14:00:14 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:21:15.089 14:00:14 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:21:15.089 14:00:14 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:15.349 14:00:14 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:21:15.349 14:00:14 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:21:15.349 14:00:14 keyring_linux -- keyring/linux.sh@23 -- # return 00:21:15.349 14:00:14 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:15.349 14:00:14 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:21:15.349 14:00:14 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:15.349 
14:00:14 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:21:15.349 14:00:14 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:15.349 14:00:14 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:21:15.349 14:00:14 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:15.349 14:00:14 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:15.349 14:00:14 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:15.608 [2024-12-06 14:00:14.854513] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:15.608 [2024-12-06 14:00:14.855251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12eab90 (107): Transport endpoint is not connected 00:21:15.609 [2024-12-06 14:00:14.856222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12eab90 (9): Bad file descriptor 00:21:15.609 [2024-12-06 14:00:14.857219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:21:15.609 [2024-12-06 14:00:14.857250] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:15.609 [2024-12-06 14:00:14.857259] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:21:15.609 [2024-12-06 14:00:14.857269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:21:15.609 request: 00:21:15.609 { 00:21:15.609 "name": "nvme0", 00:21:15.609 "trtype": "tcp", 00:21:15.609 "traddr": "127.0.0.1", 00:21:15.609 "adrfam": "ipv4", 00:21:15.609 "trsvcid": "4420", 00:21:15.609 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:15.609 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:15.609 "prchk_reftag": false, 00:21:15.609 "prchk_guard": false, 00:21:15.609 "hdgst": false, 00:21:15.609 "ddgst": false, 00:21:15.609 "psk": ":spdk-test:key1", 00:21:15.609 "allow_unrecognized_csi": false, 00:21:15.609 "method": "bdev_nvme_attach_controller", 00:21:15.609 "req_id": 1 00:21:15.609 } 00:21:15.609 Got JSON-RPC error response 00:21:15.609 response: 00:21:15.609 { 00:21:15.609 "code": -5, 00:21:15.609 "message": "Input/output error" 00:21:15.609 } 00:21:15.609 14:00:14 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:21:15.609 14:00:14 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:15.609 14:00:14 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:15.609 14:00:14 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:15.609 14:00:14 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:21:15.609 14:00:14 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:21:15.609 14:00:14 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:21:15.609 14:00:14 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:21:15.609 14:00:14 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:21:15.609 14:00:14 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:21:15.609 14:00:14 keyring_linux -- keyring/linux.sh@33 -- # sn=807117386 00:21:15.609 14:00:14 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 807117386 00:21:15.609 1 links removed 00:21:15.609 14:00:14 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:21:15.609 14:00:14 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:21:15.609 14:00:14 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:21:15.609 14:00:14 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:21:15.609 14:00:14 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:21:15.609 14:00:14 keyring_linux -- keyring/linux.sh@33 -- # sn=237332266 00:21:15.609 14:00:14 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 237332266 00:21:15.609 1 links removed 00:21:15.609 14:00:14 keyring_linux -- keyring/linux.sh@41 -- # killprocess 85567 00:21:15.609 14:00:14 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 85567 ']' 00:21:15.609 14:00:14 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 85567 00:21:15.609 14:00:14 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:21:15.609 14:00:14 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:15.609 14:00:14 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85567 00:21:15.609 14:00:14 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:15.609 14:00:14 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:15.609 killing process with pid 85567 00:21:15.609 14:00:14 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85567' 00:21:15.609 14:00:14 keyring_linux -- common/autotest_common.sh@973 -- # kill 85567 00:21:15.609 Received shutdown signal, test time was about 1.000000 seconds 00:21:15.609 00:21:15.609 Latency(us) 
00:21:15.609 [2024-12-06T14:00:15.013Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:15.609 [2024-12-06T14:00:15.013Z] =================================================================================================================== 00:21:15.609 [2024-12-06T14:00:15.013Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:15.609 14:00:14 keyring_linux -- common/autotest_common.sh@978 -- # wait 85567 00:21:15.867 14:00:15 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85551 00:21:15.867 14:00:15 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 85551 ']' 00:21:15.867 14:00:15 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 85551 00:21:15.867 14:00:15 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:21:15.867 14:00:15 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:15.867 14:00:15 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85551 00:21:15.867 14:00:15 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:15.867 14:00:15 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:15.867 killing process with pid 85551 00:21:15.867 14:00:15 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85551' 00:21:15.867 14:00:15 keyring_linux -- common/autotest_common.sh@973 -- # kill 85551 00:21:15.867 14:00:15 keyring_linux -- common/autotest_common.sh@978 -- # wait 85551 00:21:16.433 00:21:16.433 real 0m5.880s 00:21:16.433 user 0m11.107s 00:21:16.433 sys 0m1.669s 00:21:16.433 14:00:15 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:16.433 14:00:15 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:16.433 ************************************ 00:21:16.433 END TEST keyring_linux 00:21:16.433 ************************************ 00:21:16.433 14:00:15 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:21:16.433 14:00:15 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:21:16.433 14:00:15 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:21:16.433 14:00:15 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:21:16.433 14:00:15 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:21:16.433 14:00:15 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:21:16.433 14:00:15 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:21:16.433 14:00:15 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:21:16.433 14:00:15 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:21:16.433 14:00:15 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:21:16.433 14:00:15 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:21:16.433 14:00:15 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:21:16.433 14:00:15 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:21:16.433 14:00:15 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:21:16.433 14:00:15 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:21:16.433 14:00:15 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:21:16.433 14:00:15 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:21:16.433 14:00:15 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:16.433 14:00:15 -- common/autotest_common.sh@10 -- # set +x 00:21:16.433 14:00:15 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:21:16.433 14:00:15 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:21:16.433 14:00:15 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:21:16.433 14:00:15 -- common/autotest_common.sh@10 -- # set +x 00:21:18.335 INFO: APP EXITING 00:21:18.335 INFO: killing all VMs 
00:21:18.335 INFO: killing vhost app 00:21:18.335 INFO: EXIT DONE 00:21:18.903 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:18.903 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:21:18.903 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:21:19.473 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:19.733 Cleaning 00:21:19.733 Removing: /var/run/dpdk/spdk0/config 00:21:19.733 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:19.733 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:19.733 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:19.733 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:19.733 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:19.733 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:19.733 Removing: /var/run/dpdk/spdk1/config 00:21:19.733 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:21:19.733 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:21:19.733 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:21:19.733 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:21:19.733 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:21:19.733 Removing: /var/run/dpdk/spdk1/hugepage_info 00:21:19.733 Removing: /var/run/dpdk/spdk2/config 00:21:19.733 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:21:19.733 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:21:19.733 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:21:19.733 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:21:19.733 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:21:19.733 Removing: /var/run/dpdk/spdk2/hugepage_info 00:21:19.733 Removing: /var/run/dpdk/spdk3/config 00:21:19.733 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:21:19.733 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:21:19.733 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:21:19.733 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:21:19.733 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:21:19.733 Removing: /var/run/dpdk/spdk3/hugepage_info 00:21:19.733 Removing: /var/run/dpdk/spdk4/config 00:21:19.733 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:21:19.733 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:21:19.733 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:21:19.733 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:21:19.733 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:21:19.733 Removing: /var/run/dpdk/spdk4/hugepage_info 00:21:19.733 Removing: /dev/shm/nvmf_trace.0 00:21:19.733 Removing: /dev/shm/spdk_tgt_trace.pid56684 00:21:19.733 Removing: /var/run/dpdk/spdk0 00:21:19.733 Removing: /var/run/dpdk/spdk1 00:21:19.733 Removing: /var/run/dpdk/spdk2 00:21:19.733 Removing: /var/run/dpdk/spdk3 00:21:19.733 Removing: /var/run/dpdk/spdk4 00:21:19.733 Removing: /var/run/dpdk/spdk_pid56520 00:21:19.733 Removing: /var/run/dpdk/spdk_pid56684 00:21:19.733 Removing: /var/run/dpdk/spdk_pid56883 00:21:19.733 Removing: /var/run/dpdk/spdk_pid56969 00:21:19.733 Removing: /var/run/dpdk/spdk_pid56997 00:21:19.733 Removing: /var/run/dpdk/spdk_pid57106 00:21:19.733 Removing: /var/run/dpdk/spdk_pid57117 00:21:19.733 Removing: /var/run/dpdk/spdk_pid57256 00:21:19.733 Removing: /var/run/dpdk/spdk_pid57452 00:21:19.733 Removing: /var/run/dpdk/spdk_pid57606 00:21:19.733 Removing: /var/run/dpdk/spdk_pid57685 00:21:19.733 
Removing: /var/run/dpdk/spdk_pid57766 00:21:19.733 Removing: /var/run/dpdk/spdk_pid57857 00:21:19.733 Removing: /var/run/dpdk/spdk_pid57936 00:21:19.733 Removing: /var/run/dpdk/spdk_pid57969 00:21:19.733 Removing: /var/run/dpdk/spdk_pid58003 00:21:19.733 Removing: /var/run/dpdk/spdk_pid58074 00:21:19.733 Removing: /var/run/dpdk/spdk_pid58166 00:21:19.733 Removing: /var/run/dpdk/spdk_pid58599 00:21:19.733 Removing: /var/run/dpdk/spdk_pid58638 00:21:19.733 Removing: /var/run/dpdk/spdk_pid58689 00:21:19.733 Removing: /var/run/dpdk/spdk_pid58705 00:21:19.733 Removing: /var/run/dpdk/spdk_pid58778 00:21:19.733 Removing: /var/run/dpdk/spdk_pid58794 00:21:19.733 Removing: /var/run/dpdk/spdk_pid58866 00:21:19.733 Removing: /var/run/dpdk/spdk_pid58869 00:21:19.733 Removing: /var/run/dpdk/spdk_pid58920 00:21:19.733 Removing: /var/run/dpdk/spdk_pid58932 00:21:19.733 Removing: /var/run/dpdk/spdk_pid58976 00:21:19.733 Removing: /var/run/dpdk/spdk_pid58994 00:21:19.733 Removing: /var/run/dpdk/spdk_pid59130 00:21:19.993 Removing: /var/run/dpdk/spdk_pid59160 00:21:19.993 Removing: /var/run/dpdk/spdk_pid59243 00:21:19.993 Removing: /var/run/dpdk/spdk_pid59581 00:21:19.993 Removing: /var/run/dpdk/spdk_pid59594 00:21:19.993 Removing: /var/run/dpdk/spdk_pid59625 00:21:19.993 Removing: /var/run/dpdk/spdk_pid59644 00:21:19.993 Removing: /var/run/dpdk/spdk_pid59660 00:21:19.993 Removing: /var/run/dpdk/spdk_pid59679 00:21:19.993 Removing: /var/run/dpdk/spdk_pid59698 00:21:19.993 Removing: /var/run/dpdk/spdk_pid59713 00:21:19.993 Removing: /var/run/dpdk/spdk_pid59732 00:21:19.993 Removing: /var/run/dpdk/spdk_pid59746 00:21:19.993 Removing: /var/run/dpdk/spdk_pid59767 00:21:19.993 Removing: /var/run/dpdk/spdk_pid59786 00:21:19.993 Removing: /var/run/dpdk/spdk_pid59799 00:21:19.993 Removing: /var/run/dpdk/spdk_pid59815 00:21:19.993 Removing: /var/run/dpdk/spdk_pid59834 00:21:19.993 Removing: /var/run/dpdk/spdk_pid59853 00:21:19.993 Removing: /var/run/dpdk/spdk_pid59868 00:21:19.993 Removing: /var/run/dpdk/spdk_pid59887 00:21:19.993 Removing: /var/run/dpdk/spdk_pid59901 00:21:19.993 Removing: /var/run/dpdk/spdk_pid59922 00:21:19.993 Removing: /var/run/dpdk/spdk_pid59952 00:21:19.993 Removing: /var/run/dpdk/spdk_pid59966 00:21:19.993 Removing: /var/run/dpdk/spdk_pid60001 00:21:19.993 Removing: /var/run/dpdk/spdk_pid60073 00:21:19.993 Removing: /var/run/dpdk/spdk_pid60096 00:21:19.993 Removing: /var/run/dpdk/spdk_pid60111 00:21:19.993 Removing: /var/run/dpdk/spdk_pid60145 00:21:19.993 Removing: /var/run/dpdk/spdk_pid60151 00:21:19.993 Removing: /var/run/dpdk/spdk_pid60164 00:21:19.993 Removing: /var/run/dpdk/spdk_pid60201 00:21:19.993 Removing: /var/run/dpdk/spdk_pid60221 00:21:19.993 Removing: /var/run/dpdk/spdk_pid60255 00:21:19.993 Removing: /var/run/dpdk/spdk_pid60259 00:21:19.993 Removing: /var/run/dpdk/spdk_pid60274 00:21:19.993 Removing: /var/run/dpdk/spdk_pid60278 00:21:19.993 Removing: /var/run/dpdk/spdk_pid60293 00:21:19.993 Removing: /var/run/dpdk/spdk_pid60297 00:21:19.993 Removing: /var/run/dpdk/spdk_pid60312 00:21:19.993 Removing: /var/run/dpdk/spdk_pid60322 00:21:19.993 Removing: /var/run/dpdk/spdk_pid60350 00:21:19.993 Removing: /var/run/dpdk/spdk_pid60382 00:21:19.993 Removing: /var/run/dpdk/spdk_pid60386 00:21:19.993 Removing: /var/run/dpdk/spdk_pid60422 00:21:19.993 Removing: /var/run/dpdk/spdk_pid60432 00:21:19.993 Removing: /var/run/dpdk/spdk_pid60439 00:21:19.993 Removing: /var/run/dpdk/spdk_pid60484 00:21:19.993 Removing: /var/run/dpdk/spdk_pid60491 00:21:19.993 Removing: 
/var/run/dpdk/spdk_pid60523 00:21:19.993 Removing: /var/run/dpdk/spdk_pid60531 00:21:19.993 Removing: /var/run/dpdk/spdk_pid60538 00:21:19.993 Removing: /var/run/dpdk/spdk_pid60550 00:21:19.993 Removing: /var/run/dpdk/spdk_pid60553 00:21:19.993 Removing: /var/run/dpdk/spdk_pid60566 00:21:19.993 Removing: /var/run/dpdk/spdk_pid60574 00:21:19.993 Removing: /var/run/dpdk/spdk_pid60581 00:21:19.993 Removing: /var/run/dpdk/spdk_pid60663 00:21:19.993 Removing: /var/run/dpdk/spdk_pid60713 00:21:19.993 Removing: /var/run/dpdk/spdk_pid60825 00:21:19.993 Removing: /var/run/dpdk/spdk_pid60860 00:21:19.993 Removing: /var/run/dpdk/spdk_pid60904 00:21:19.993 Removing: /var/run/dpdk/spdk_pid60924 00:21:19.993 Removing: /var/run/dpdk/spdk_pid60947 00:21:19.993 Removing: /var/run/dpdk/spdk_pid60961 00:21:19.993 Removing: /var/run/dpdk/spdk_pid60993 00:21:19.993 Removing: /var/run/dpdk/spdk_pid61014 00:21:19.993 Removing: /var/run/dpdk/spdk_pid61092 00:21:19.993 Removing: /var/run/dpdk/spdk_pid61113 00:21:19.993 Removing: /var/run/dpdk/spdk_pid61152 00:21:19.993 Removing: /var/run/dpdk/spdk_pid61227 00:21:19.993 Removing: /var/run/dpdk/spdk_pid61282 00:21:19.993 Removing: /var/run/dpdk/spdk_pid61316 00:21:19.993 Removing: /var/run/dpdk/spdk_pid61409 00:21:19.993 Removing: /var/run/dpdk/spdk_pid61457 00:21:19.993 Removing: /var/run/dpdk/spdk_pid61495 00:21:19.993 Removing: /var/run/dpdk/spdk_pid61722 00:21:19.993 Removing: /var/run/dpdk/spdk_pid61819 00:21:19.993 Removing: /var/run/dpdk/spdk_pid61853 00:21:20.253 Removing: /var/run/dpdk/spdk_pid61877 00:21:20.253 Removing: /var/run/dpdk/spdk_pid61915 00:21:20.253 Removing: /var/run/dpdk/spdk_pid61950 00:21:20.253 Removing: /var/run/dpdk/spdk_pid61983 00:21:20.253 Removing: /var/run/dpdk/spdk_pid62015 00:21:20.253 Removing: /var/run/dpdk/spdk_pid62405 00:21:20.253 Removing: /var/run/dpdk/spdk_pid62445 00:21:20.253 Removing: /var/run/dpdk/spdk_pid62797 00:21:20.253 Removing: /var/run/dpdk/spdk_pid63266 00:21:20.253 Removing: /var/run/dpdk/spdk_pid63537 00:21:20.253 Removing: /var/run/dpdk/spdk_pid64422 00:21:20.253 Removing: /var/run/dpdk/spdk_pid65337 00:21:20.253 Removing: /var/run/dpdk/spdk_pid65460 00:21:20.253 Removing: /var/run/dpdk/spdk_pid65528 00:21:20.253 Removing: /var/run/dpdk/spdk_pid66954 00:21:20.253 Removing: /var/run/dpdk/spdk_pid67271 00:21:20.253 Removing: /var/run/dpdk/spdk_pid70891 00:21:20.253 Removing: /var/run/dpdk/spdk_pid71252 00:21:20.253 Removing: /var/run/dpdk/spdk_pid71363 00:21:20.253 Removing: /var/run/dpdk/spdk_pid71497 00:21:20.253 Removing: /var/run/dpdk/spdk_pid71524 00:21:20.253 Removing: /var/run/dpdk/spdk_pid71551 00:21:20.253 Removing: /var/run/dpdk/spdk_pid71581 00:21:20.253 Removing: /var/run/dpdk/spdk_pid71673 00:21:20.253 Removing: /var/run/dpdk/spdk_pid71801 00:21:20.253 Removing: /var/run/dpdk/spdk_pid71953 00:21:20.253 Removing: /var/run/dpdk/spdk_pid72035 00:21:20.253 Removing: /var/run/dpdk/spdk_pid72229 00:21:20.253 Removing: /var/run/dpdk/spdk_pid72300 00:21:20.253 Removing: /var/run/dpdk/spdk_pid72380 00:21:20.253 Removing: /var/run/dpdk/spdk_pid72748 00:21:20.253 Removing: /var/run/dpdk/spdk_pid73161 00:21:20.253 Removing: /var/run/dpdk/spdk_pid73162 00:21:20.253 Removing: /var/run/dpdk/spdk_pid73163 00:21:20.253 Removing: /var/run/dpdk/spdk_pid73415 00:21:20.253 Removing: /var/run/dpdk/spdk_pid73679 00:21:20.253 Removing: /var/run/dpdk/spdk_pid74062 00:21:20.253 Removing: /var/run/dpdk/spdk_pid74065 00:21:20.253 Removing: /var/run/dpdk/spdk_pid74392 00:21:20.253 Removing: /var/run/dpdk/spdk_pid74406 
00:21:20.253 Removing: /var/run/dpdk/spdk_pid74420 00:21:20.253 Removing: /var/run/dpdk/spdk_pid74451 00:21:20.253 Removing: /var/run/dpdk/spdk_pid74460 00:21:20.253 Removing: /var/run/dpdk/spdk_pid74834 00:21:20.253 Removing: /var/run/dpdk/spdk_pid74883 00:21:20.253 Removing: /var/run/dpdk/spdk_pid75213 00:21:20.253 Removing: /var/run/dpdk/spdk_pid75408 00:21:20.254 Removing: /var/run/dpdk/spdk_pid75837 00:21:20.254 Removing: /var/run/dpdk/spdk_pid76379 00:21:20.254 Removing: /var/run/dpdk/spdk_pid77260 00:21:20.254 Removing: /var/run/dpdk/spdk_pid77894 00:21:20.254 Removing: /var/run/dpdk/spdk_pid77896 00:21:20.254 Removing: /var/run/dpdk/spdk_pid79925 00:21:20.254 Removing: /var/run/dpdk/spdk_pid79978 00:21:20.254 Removing: /var/run/dpdk/spdk_pid80025 00:21:20.254 Removing: /var/run/dpdk/spdk_pid80079 00:21:20.254 Removing: /var/run/dpdk/spdk_pid80187 00:21:20.254 Removing: /var/run/dpdk/spdk_pid80238 00:21:20.254 Removing: /var/run/dpdk/spdk_pid80300 00:21:20.254 Removing: /var/run/dpdk/spdk_pid80360 00:21:20.254 Removing: /var/run/dpdk/spdk_pid80720 00:21:20.254 Removing: /var/run/dpdk/spdk_pid81930 00:21:20.254 Removing: /var/run/dpdk/spdk_pid82066 00:21:20.254 Removing: /var/run/dpdk/spdk_pid82310 00:21:20.254 Removing: /var/run/dpdk/spdk_pid82908 00:21:20.254 Removing: /var/run/dpdk/spdk_pid83062 00:21:20.254 Removing: /var/run/dpdk/spdk_pid83219 00:21:20.254 Removing: /var/run/dpdk/spdk_pid83317 00:21:20.254 Removing: /var/run/dpdk/spdk_pid83485 00:21:20.254 Removing: /var/run/dpdk/spdk_pid83594 00:21:20.254 Removing: /var/run/dpdk/spdk_pid84308 00:21:20.254 Removing: /var/run/dpdk/spdk_pid84349 00:21:20.254 Removing: /var/run/dpdk/spdk_pid84383 00:21:20.254 Removing: /var/run/dpdk/spdk_pid84635 00:21:20.254 Removing: /var/run/dpdk/spdk_pid84670 00:21:20.254 Removing: /var/run/dpdk/spdk_pid84704 00:21:20.254 Removing: /var/run/dpdk/spdk_pid85178 00:21:20.254 Removing: /var/run/dpdk/spdk_pid85188 00:21:20.513 Removing: /var/run/dpdk/spdk_pid85428 00:21:20.513 Removing: /var/run/dpdk/spdk_pid85551 00:21:20.513 Removing: /var/run/dpdk/spdk_pid85567 00:21:20.513 Clean 00:21:20.513 14:00:19 -- common/autotest_common.sh@1453 -- # return 0 00:21:20.513 14:00:19 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:21:20.513 14:00:19 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:20.513 14:00:19 -- common/autotest_common.sh@10 -- # set +x 00:21:20.513 14:00:19 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:21:20.513 14:00:19 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:20.513 14:00:19 -- common/autotest_common.sh@10 -- # set +x 00:21:20.513 14:00:19 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:20.513 14:00:19 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:21:20.513 14:00:19 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:21:20.513 14:00:19 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:21:20.513 14:00:19 -- spdk/autotest.sh@398 -- # hostname 00:21:20.513 14:00:19 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:21:20.773 geninfo: WARNING: invalid characters removed from testname! 
00:21:42.702 14:00:41 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:45.238 14:00:44 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:47.789 14:00:46 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:50.325 14:00:49 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:52.224 14:00:51 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:54.774 14:00:53 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:57.308 14:00:56 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:21:57.308 14:00:56 -- spdk/autorun.sh@1 -- $ timing_finish 00:21:57.308 14:00:56 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:21:57.308 14:00:56 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:21:57.308 14:00:56 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:21:57.308 14:00:56 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:57.308 + [[ -n 5215 ]] 00:21:57.308 + sudo kill 5215 00:21:57.318 [Pipeline] } 00:21:57.335 [Pipeline] // timeout 00:21:57.340 [Pipeline] } 00:21:57.355 [Pipeline] // stage 00:21:57.361 [Pipeline] } 00:21:57.375 [Pipeline] // catchError 00:21:57.384 [Pipeline] stage 00:21:57.387 [Pipeline] { (Stop VM) 00:21:57.399 [Pipeline] sh 00:21:57.679 + vagrant halt 00:22:00.298 ==> default: Halting domain... 
00:22:06.884 [Pipeline] sh 00:22:07.161 + vagrant destroy -f 00:22:10.450 ==> default: Removing domain... 00:22:10.463 [Pipeline] sh 00:22:10.746 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:22:10.757 [Pipeline] } 00:22:10.774 [Pipeline] // stage 00:22:10.779 [Pipeline] } 00:22:10.794 [Pipeline] // dir 00:22:10.799 [Pipeline] } 00:22:10.815 [Pipeline] // wrap 00:22:10.822 [Pipeline] } 00:22:10.836 [Pipeline] // catchError 00:22:10.845 [Pipeline] stage 00:22:10.847 [Pipeline] { (Epilogue) 00:22:10.858 [Pipeline] sh 00:22:11.138 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:22:16.418 [Pipeline] catchError 00:22:16.420 [Pipeline] { 00:22:16.431 [Pipeline] sh 00:22:16.711 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:22:16.711 Artifacts sizes are good 00:22:16.719 [Pipeline] } 00:22:16.732 [Pipeline] // catchError 00:22:16.742 [Pipeline] archiveArtifacts 00:22:16.748 Archiving artifacts 00:22:16.903 [Pipeline] cleanWs 00:22:16.913 [WS-CLEANUP] Deleting project workspace... 00:22:16.913 [WS-CLEANUP] Deferred wipeout is used... 00:22:16.919 [WS-CLEANUP] done 00:22:16.921 [Pipeline] } 00:22:16.934 [Pipeline] // stage 00:22:16.939 [Pipeline] } 00:22:16.951 [Pipeline] // node 00:22:16.955 [Pipeline] End of Pipeline 00:22:16.985 Finished: SUCCESS